Test Report: QEMU_macOS 19774

95efbc930ecf4c942ef544a2e8709bfd2a544710:2024-10-08:36559

Failed tests (156/258)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.81
7 TestDownloadOnly/v1.20.0/kubectl 0
22 TestOffline 10.33
27 TestAddons/Setup 10.15
28 TestCertOptions 10.16
29 TestCertExpiration 197.44
30 TestDockerFlags 10.09
31 TestForceSystemdFlag 10.05
32 TestForceSystemdEnv 10.13
38 TestErrorSpam/setup 9.98
47 TestFunctional/serial/StartWithProxy 9.89
49 TestFunctional/serial/SoftStart 5.27
50 TestFunctional/serial/KubeContext 0.06
51 TestFunctional/serial/KubectlGetPods 0.06
58 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.06
59 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
61 TestFunctional/serial/MinikubeKubectlCmd 0.77
62 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.25
63 TestFunctional/serial/ExtraConfig 5.28
64 TestFunctional/serial/ComponentHealth 0.06
65 TestFunctional/serial/LogsCmd 0.08
66 TestFunctional/serial/LogsFileCmd 0.07
67 TestFunctional/serial/InvalidService 0.03
70 TestFunctional/parallel/DashboardCmd 0.21
73 TestFunctional/parallel/StatusCmd 0.14
77 TestFunctional/parallel/ServiceCmdConnect 0.14
79 TestFunctional/parallel/PersistentVolumeClaim 0.03
81 TestFunctional/parallel/SSHCmd 0.13
82 TestFunctional/parallel/CpCmd 0.3
84 TestFunctional/parallel/FileSync 0.08
85 TestFunctional/parallel/CertSync 0.34
89 TestFunctional/parallel/NodeLabels 0.06
91 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
95 TestFunctional/parallel/Version/components 0.05
96 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
97 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
98 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
99 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
100 TestFunctional/parallel/ImageCommands/ImageBuild 0.13
102 TestFunctional/parallel/DockerEnv/bash 0.05
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.05
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.05
106 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.3
107 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
108 TestFunctional/parallel/ServiceCmd/List 0.05
109 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.3
110 TestFunctional/parallel/ServiceCmd/JSONOutput 0.06
111 TestFunctional/parallel/ServiceCmd/HTTPS 0.05
112 TestFunctional/parallel/ServiceCmd/Format 0.05
113 TestFunctional/parallel/ServiceCmd/URL 0.05
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.21
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.09
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 113.71
121 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
123 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.07
133 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 30.05
141 TestMultiControlPlane/serial/StartCluster 9.84
142 TestMultiControlPlane/serial/DeployApp 102.71
143 TestMultiControlPlane/serial/PingHostFromPods 0.1
144 TestMultiControlPlane/serial/AddWorkerNode 0.08
145 TestMultiControlPlane/serial/NodeLabels 0.06
146 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.09
147 TestMultiControlPlane/serial/CopyFile 0.07
148 TestMultiControlPlane/serial/StopSecondaryNode 0.13
149 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.09
150 TestMultiControlPlane/serial/RestartSecondaryNode 57.45
151 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.09
152 TestMultiControlPlane/serial/RestartClusterKeepsNodes 7.32
153 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
154 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.08
155 TestMultiControlPlane/serial/StopCluster 3.28
156 TestMultiControlPlane/serial/RestartCluster 5.26
157 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.09
158 TestMultiControlPlane/serial/AddSecondaryNode 0.08
159 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.09
162 TestImageBuild/serial/Setup 9.89
165 TestJSONOutput/start/Command 9.85
171 TestJSONOutput/pause/Command 0.09
177 TestJSONOutput/unpause/Command 0.05
194 TestMinikubeProfile 10.1
197 TestMountStart/serial/StartWithMountFirst 10.65
200 TestMultiNode/serial/FreshStart2Nodes 10.06
201 TestMultiNode/serial/DeployApp2Nodes 84.69
202 TestMultiNode/serial/PingHostFrom2Pods 0.1
203 TestMultiNode/serial/AddNode 0.08
204 TestMultiNode/serial/MultiNodeLabels 0.06
205 TestMultiNode/serial/ProfileList 0.09
206 TestMultiNode/serial/CopyFile 0.07
207 TestMultiNode/serial/StopNode 0.15
208 TestMultiNode/serial/StartAfterStop 43.95
209 TestMultiNode/serial/RestartKeepsNodes 7.24
210 TestMultiNode/serial/DeleteNode 0.12
211 TestMultiNode/serial/StopMultiNode 4.06
212 TestMultiNode/serial/RestartMultiNode 5.27
213 TestMultiNode/serial/ValidateNameConflict 20.18
217 TestPreload 10.07
219 TestScheduledStopUnix 10.04
220 TestSkaffold 17.52
223 TestRunningBinaryUpgrade 642.1
225 TestKubernetesUpgrade 17.45
239 TestStoppedBinaryUpgrade/Upgrade 608.51
249 TestPause/serial/Start 10.25
252 TestNoKubernetes/serial/StartWithK8s 10.06
253 TestNoKubernetes/serial/StartWithStopK8s 6.01
254 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 1.84
255 TestNoKubernetes/serial/Start 6.48
256 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 1.35
260 TestNoKubernetes/serial/StartNoArgs 5.92
262 TestNetworkPlugins/group/auto/Start 9.99
263 TestNetworkPlugins/group/kindnet/Start 9.77
264 TestNetworkPlugins/group/calico/Start 9.83
265 TestNetworkPlugins/group/custom-flannel/Start 9.86
266 TestNetworkPlugins/group/false/Start 9.95
267 TestNetworkPlugins/group/enable-default-cni/Start 10.02
268 TestNetworkPlugins/group/flannel/Start 9.78
269 TestNetworkPlugins/group/bridge/Start 9.83
270 TestNetworkPlugins/group/kubenet/Start 9.88
272 TestStartStop/group/old-k8s-version/serial/FirstStart 9.86
273 TestStartStop/group/old-k8s-version/serial/DeployApp 0.1
274 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
277 TestStartStop/group/old-k8s-version/serial/SecondStart 5.28
278 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.04
279 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
280 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.08
281 TestStartStop/group/old-k8s-version/serial/Pause 0.12
283 TestStartStop/group/no-preload/serial/FirstStart 10.04
284 TestStartStop/group/no-preload/serial/DeployApp 0.1
285 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
288 TestStartStop/group/no-preload/serial/SecondStart 5.26
289 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.04
290 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
291 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.08
292 TestStartStop/group/no-preload/serial/Pause 0.11
294 TestStartStop/group/embed-certs/serial/FirstStart 9.85
295 TestStartStop/group/embed-certs/serial/DeployApp 0.1
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
299 TestStartStop/group/embed-certs/serial/SecondStart 5.27
300 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
301 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
302 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.08
303 TestStartStop/group/embed-certs/serial/Pause 0.11
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 10.06
307 TestStartStop/group/newest-cni/serial/FirstStart 9.94
308 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.1
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
315 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.27
317 TestStartStop/group/newest-cni/serial/SecondStart 5.26
318 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
319 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
320 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.08
321 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
324 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
325 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (41.81s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-430000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-430000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.813244s)

-- stdout --
	{"specversion":"1.0","id":"64c83491-dc7c-4988-8fdf-414aaefa478c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a645db8c-ee04-499c-a0ae-c9aed12f3038","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19774"}}
	{"specversion":"1.0","id":"5f4f4126-34ce-4dd5-a466-61e382533071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig"}}
	{"specversion":"1.0","id":"7dce9673-38df-4418-946d-7cd8984bb98a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"19290f11-6b47-4318-b7a7-e2332c81cefb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d546df77-6ba0-40e6-8a85-aec0ff29ed9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube"}}
	{"specversion":"1.0","id":"5572bb05-d94b-4e7c-93eb-293a03341c0b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"5c3138fe-6350-468e-931f-41bd98687489","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e56f14a6-6bae-4ecd-a6d5-1ebb72de2fbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"ad4bb11b-184d-427e-80bd-5d3a6c0a3e40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"02d0b4c9-5837-4209-a991-1a94a6ea2076","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-430000\" primary control-plane node in \"download-only-430000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"72960c19-f66c-4064-9833-4224645d999c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0f36ec8-9155-41a8-8e31-707c2e0ac1eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0] Decompressors:map[bz2:0x1400078ab20 gz:0x1400078ab28 tar:0x1400078aad0 tar.bz2:0x1400078aae0 tar.gz:0x1400078aaf0 tar.xz:0x1400078ab00 tar.zst:0x1400078ab10 tbz2:0x1400078aae0 tgz:0x1400078aaf0 txz:0x1400078ab00 tzst:0x1400078ab10 xz:0x1400078ab40 zip:0x1400078ab60 zst:0x1400078ab48] Getters:map[file:0x140001857a0 http:0x140000dae60 https:0x140000dafa0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"aa60c447-135b-436e-8235-a741f7254258","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I1008 10:42:09.511836    6908 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:42:09.512027    6908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:42:09.512030    6908 out.go:358] Setting ErrFile to fd 2...
	I1008 10:42:09.512033    6908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:42:09.512150    6908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	W1008 10:42:09.512222    6908 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19774-6384/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19774-6384/.minikube/config/config.json: no such file or directory
	I1008 10:42:09.513677    6908 out.go:352] Setting JSON to true
	I1008 10:42:09.531464    6908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4299,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:42:09.531530    6908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:42:09.536986    6908 out.go:97] [download-only-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:42:09.537110    6908 notify.go:220] Checking for updates...
	W1008 10:42:09.537171    6908 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 10:42:09.540956    6908 out.go:169] MINIKUBE_LOCATION=19774
	I1008 10:42:09.543910    6908 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:42:09.547964    6908 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:42:09.550989    6908 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:42:09.553886    6908 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	W1008 10:42:09.559954    6908 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 10:42:09.560147    6908 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:42:09.562973    6908 out.go:97] Using the qemu2 driver based on user configuration
	I1008 10:42:09.562991    6908 start.go:297] selected driver: qemu2
	I1008 10:42:09.563016    6908 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:42:09.563080    6908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:42:09.565917    6908 out.go:169] Automatically selected the socket_vmnet network
	I1008 10:42:09.571419    6908 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1008 10:42:09.571505    6908 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 10:42:09.571543    6908 cni.go:84] Creating CNI manager for ""
	I1008 10:42:09.571579    6908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1008 10:42:09.571638    6908 start.go:340] cluster config:
	{Name:download-only-430000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:42:09.576060    6908 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:42:09.580893    6908 out.go:97] Downloading VM boot image ...
	I1008 10:42:09.580921    6908 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1008 10:42:31.422033    6908 out.go:97] Starting "download-only-430000" primary control-plane node in "download-only-430000" cluster
	I1008 10:42:31.422053    6908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:42:31.719337    6908 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 10:42:31.719381    6908 cache.go:56] Caching tarball of preloaded images
	I1008 10:42:31.720257    6908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:42:31.725247    6908 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1008 10:42:31.725286    6908 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:32.302841    6908 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 10:42:49.961301    6908 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:49.961487    6908 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:50.654301    6908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1008 10:42:50.654517    6908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/download-only-430000/config.json ...
	I1008 10:42:50.654535    6908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/download-only-430000/config.json: {Name:mkb474b260b53c66663610ddbd7d258188150971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:42:50.654777    6908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:42:50.655001    6908 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1008 10:42:51.244836    6908 out.go:193] 
	W1008 10:42:51.249974    6908 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0] Decompressors:map[bz2:0x1400078ab20 gz:0x1400078ab28 tar:0x1400078aad0 tar.bz2:0x1400078aae0 tar.gz:0x1400078aaf0 tar.xz:0x1400078ab00 tar.zst:0x1400078ab10 tbz2:0x1400078aae0 tgz:0x1400078aaf0 txz:0x1400078ab00 tzst:0x1400078ab10 xz:0x1400078ab40 zip:0x1400078ab60 zst:0x1400078ab48] Getters:map[file:0x140001857a0 http:0x140000dae60 https:0x140000dafa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1008 10:42:51.250009    6908 out_reason.go:110] 
	W1008 10:42:51.257782    6908 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:42:51.260773    6908 out.go:193] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-430000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.81s)
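
The start aborts with exit status 40 because the checksum file for the darwin/arm64 kubectl v1.20.0 binary does not exist upstream ("bad response code: 404"), so kubectl can never be cached. A quick host-side reproduction (a diagnostic sketch, assuming curl is available; the URL is copied from the log, and the 404 is what the log reports):

	# Follow dl.k8s.io's redirects and print each HTTP status line; expect a final 404.
	curl -sIL "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256" | grep "^HTTP"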

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)
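
This failure is downstream of the json-events failure above: since the kubectl download aborted, nothing was ever written to the cache path the test stats. Checking by hand (a sketch; the path is copied from the log):

	# Expect "No such file or directory", matching the stat error in the test output.
	ls -l /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl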

TestOffline (10.33s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-841000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-841000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (10.166354625s)

-- stdout --
	* [offline-docker-841000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-841000" primary control-plane node in "offline-docker-841000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-841000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:54:26.395502    8327 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:54:26.395680    8327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:54:26.395687    8327 out.go:358] Setting ErrFile to fd 2...
	I1008 10:54:26.395689    8327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:54:26.395837    8327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:54:26.397211    8327 out.go:352] Setting JSON to false
	I1008 10:54:26.416923    8327 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5036,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:54:26.417039    8327 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:54:26.421596    8327 out.go:177] * [offline-docker-841000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:54:26.429536    8327 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:54:26.429554    8327 notify.go:220] Checking for updates...
	I1008 10:54:26.436610    8327 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:54:26.437739    8327 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:54:26.440576    8327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:54:26.443603    8327 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:54:26.446618    8327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:54:26.450009    8327 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:54:26.450080    8327 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:54:26.453628    8327 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 10:54:26.460587    8327 start.go:297] selected driver: qemu2
	I1008 10:54:26.460598    8327 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:54:26.460605    8327 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:54:26.462891    8327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:54:26.465634    8327 out.go:177] * Automatically selected the socket_vmnet network
	I1008 10:54:26.468697    8327 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:54:26.468717    8327 cni.go:84] Creating CNI manager for ""
	I1008 10:54:26.468739    8327 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:54:26.468743    8327 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 10:54:26.468773    8327 start.go:340] cluster config:
	{Name:offline-docker-841000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:54:26.473280    8327 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:54:26.477612    8327 out.go:177] * Starting "offline-docker-841000" primary control-plane node in "offline-docker-841000" cluster
	I1008 10:54:26.485634    8327 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:54:26.485673    8327 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:54:26.485686    8327 cache.go:56] Caching tarball of preloaded images
	I1008 10:54:26.485791    8327 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:54:26.485796    8327 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:54:26.485857    8327 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/offline-docker-841000/config.json ...
	I1008 10:54:26.485872    8327 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/offline-docker-841000/config.json: {Name:mk6e64e79909884c8879307630db987717efc392 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:54:26.486173    8327 start.go:360] acquireMachinesLock for offline-docker-841000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:54:26.486216    8327 start.go:364] duration metric: took 37.042µs to acquireMachinesLock for "offline-docker-841000"
	I1008 10:54:26.486229    8327 start.go:93] Provisioning new machine with config: &{Name:offline-docker-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:54:26.486259    8327 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:54:26.489662    8327 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 10:54:26.505105    8327 start.go:159] libmachine.API.Create for "offline-docker-841000" (driver="qemu2")
	I1008 10:54:26.505139    8327 client.go:168] LocalClient.Create starting
	I1008 10:54:26.505227    8327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:54:26.505265    8327 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:26.505274    8327 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:26.505326    8327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:54:26.505355    8327 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:26.505364    8327 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:26.505734    8327 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:54:26.654747    8327 main.go:141] libmachine: Creating SSH key...
	I1008 10:54:27.010027    8327 main.go:141] libmachine: Creating Disk image...
	I1008 10:54:27.010039    8327 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:54:27.010301    8327 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2
	I1008 10:54:27.020569    8327 main.go:141] libmachine: STDOUT: 
	I1008 10:54:27.020693    8327 main.go:141] libmachine: STDERR: 
	I1008 10:54:27.020776    8327 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2 +20000M
	I1008 10:54:27.030334    8327 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:54:27.030357    8327 main.go:141] libmachine: STDERR: 
	I1008 10:54:27.030378    8327 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2
	I1008 10:54:27.030387    8327 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:54:27.030401    8327 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:54:27.030427    8327 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:80:7f:3e:43:98 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2
	I1008 10:54:27.032542    8327 main.go:141] libmachine: STDOUT: 
	I1008 10:54:27.032664    8327 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:54:27.032693    8327 client.go:171] duration metric: took 527.546625ms to LocalClient.Create
	I1008 10:54:29.034839    8327 start.go:128] duration metric: took 2.548561s to createHost
	I1008 10:54:29.034896    8327 start.go:83] releasing machines lock for "offline-docker-841000", held for 2.548680584s
	W1008 10:54:29.034914    8327 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:29.051200    8327 out.go:177] * Deleting "offline-docker-841000" in qemu2 ...
	W1008 10:54:29.063329    8327 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:29.063342    8327 start.go:729] Will try again in 5 seconds ...
	I1008 10:54:34.065544    8327 start.go:360] acquireMachinesLock for offline-docker-841000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:54:34.066153    8327 start.go:364] duration metric: took 413.417µs to acquireMachinesLock for "offline-docker-841000"
	I1008 10:54:34.066290    8327 start.go:93] Provisioning new machine with config: &{Name:offline-docker-841000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:offline-docker-841000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:54:34.066555    8327 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:54:34.081974    8327 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 10:54:34.132200    8327 start.go:159] libmachine.API.Create for "offline-docker-841000" (driver="qemu2")
	I1008 10:54:34.132259    8327 client.go:168] LocalClient.Create starting
	I1008 10:54:34.132435    8327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:54:34.132516    8327 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:34.132535    8327 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:34.132608    8327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:54:34.132674    8327 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:34.132686    8327 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:34.133403    8327 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:54:34.292018    8327 main.go:141] libmachine: Creating SSH key...
	I1008 10:54:34.463769    8327 main.go:141] libmachine: Creating Disk image...
	I1008 10:54:34.463785    8327 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:54:34.464046    8327 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2
	I1008 10:54:34.474415    8327 main.go:141] libmachine: STDOUT: 
	I1008 10:54:34.474431    8327 main.go:141] libmachine: STDERR: 
	I1008 10:54:34.474489    8327 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2 +20000M
	I1008 10:54:34.482932    8327 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:54:34.482967    8327 main.go:141] libmachine: STDERR: 
	I1008 10:54:34.482986    8327 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2
	I1008 10:54:34.482990    8327 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:54:34.482996    8327 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:54:34.483033    8327 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/qemu.pid -device virtio-net-pci,netdev=net0,mac=62:dc:d5:7e:be:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/offline-docker-841000/disk.qcow2
	I1008 10:54:34.484871    8327 main.go:141] libmachine: STDOUT: 
	I1008 10:54:34.484884    8327 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:54:34.484898    8327 client.go:171] duration metric: took 352.635292ms to LocalClient.Create
	I1008 10:54:36.486333    8327 start.go:128] duration metric: took 2.41963325s to createHost
	I1008 10:54:36.486408    8327 start.go:83] releasing machines lock for "offline-docker-841000", held for 2.420237291s
	W1008 10:54:36.486838    8327 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-841000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-841000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:36.501331    8327 out.go:201] 
	W1008 10:54:36.506386    8327 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:54:36.506417    8327 out.go:270] * 
	* 
	W1008 10:54:36.508505    8327 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:54:36.515269    8327 out.go:201] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-841000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:629: *** TestOffline FAILED at 2024-10-08 10:54:36.527597 -0700 PDT m=+747.093930209
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-841000 -n offline-docker-841000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-841000 -n offline-docker-841000: exit status 7 (58.067417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-841000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-841000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-841000
--- FAIL: TestOffline (10.33s)
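
Both create attempts above die at the same step: qemu-system-aarch64 is launched through /opt/socket_vmnet/bin/socket_vmnet_client, and the client gets "Connection refused" on /var/run/socket_vmnet, meaning the socket_vmnet daemon is not serving that socket, so no VM ever boots. A minimal host-side check (a sketch, assuming socket_vmnet was installed via Homebrew as described in the minikube qemu2 driver docs):

	# Is the daemon running, and does its unix socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, restart the service (root is required so vmnet can be opened):
	sudo brew services start socket_vmnet

The same "Connection refused" pattern repeats in TestAddons/Setup and the other qemu2 start failures in this report, so a single socket_vmnet outage on the agent likely accounts for most of the 156 failures.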

TestAddons/Setup (10.15s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-147000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-147000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: exit status 80 (10.151642959s)

-- stdout --
	* [addons-147000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-147000" primary control-plane node in "addons-147000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-147000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:43:11.227103    6992 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:43:11.227292    6992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:43:11.227295    6992 out.go:358] Setting ErrFile to fd 2...
	I1008 10:43:11.227297    6992 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:43:11.227412    6992 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:43:11.228545    6992 out.go:352] Setting JSON to false
	I1008 10:43:11.245923    6992 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4361,"bootTime":1728405030,"procs":553,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:43:11.245986    6992 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:43:11.249734    6992 out.go:177] * [addons-147000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:43:11.255750    6992 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:43:11.255820    6992 notify.go:220] Checking for updates...
	I1008 10:43:11.262741    6992 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:43:11.265647    6992 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:43:11.268733    6992 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:43:11.271748    6992 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:43:11.273016    6992 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:43:11.275860    6992 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:43:11.279705    6992 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 10:43:11.284722    6992 start.go:297] selected driver: qemu2
	I1008 10:43:11.284728    6992 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:43:11.284733    6992 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:43:11.286999    6992 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:43:11.289723    6992 out.go:177] * Automatically selected the socket_vmnet network
	I1008 10:43:11.292797    6992 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:43:11.292810    6992 cni.go:84] Creating CNI manager for ""
	I1008 10:43:11.292830    6992 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:43:11.292840    6992 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 10:43:11.292876    6992 start.go:340] cluster config:
	{Name:addons-147000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-147000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:43:11.297398    6992 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:43:11.304734    6992 out.go:177] * Starting "addons-147000" primary control-plane node in "addons-147000" cluster
	I1008 10:43:11.308618    6992 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:43:11.308633    6992 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:43:11.308640    6992 cache.go:56] Caching tarball of preloaded images
	I1008 10:43:11.308719    6992 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:43:11.308724    6992 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:43:11.308949    6992 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/addons-147000/config.json ...
	I1008 10:43:11.308961    6992 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/addons-147000/config.json: {Name:mk8a0fa9419d3fb926b68975c36e9dcff5dbbe29 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:43:11.309313    6992 start.go:360] acquireMachinesLock for addons-147000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:43:11.309409    6992 start.go:364] duration metric: took 89.75µs to acquireMachinesLock for "addons-147000"
	I1008 10:43:11.309419    6992 start.go:93] Provisioning new machine with config: &{Name:addons-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-147000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:43:11.309450    6992 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:43:11.313733    6992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1008 10:43:11.331094    6992 start.go:159] libmachine.API.Create for "addons-147000" (driver="qemu2")
	I1008 10:43:11.331123    6992 client.go:168] LocalClient.Create starting
	I1008 10:43:11.331271    6992 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:43:11.590260    6992 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:43:11.633288    6992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:43:11.773990    6992 main.go:141] libmachine: Creating SSH key...
	I1008 10:43:11.955830    6992 main.go:141] libmachine: Creating Disk image...
	I1008 10:43:11.955842    6992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:43:11.956106    6992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2
	I1008 10:43:11.966290    6992 main.go:141] libmachine: STDOUT: 
	I1008 10:43:11.966308    6992 main.go:141] libmachine: STDERR: 
	I1008 10:43:11.966361    6992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2 +20000M
	I1008 10:43:11.974684    6992 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:43:11.974707    6992 main.go:141] libmachine: STDERR: 
	I1008 10:43:11.974725    6992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2
	I1008 10:43:11.974730    6992 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:43:11.974770    6992 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:43:11.974807    6992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:eb:47:fd:97:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2
	I1008 10:43:11.976522    6992 main.go:141] libmachine: STDOUT: 
	I1008 10:43:11.976537    6992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:43:11.976565    6992 client.go:171] duration metric: took 645.427875ms to LocalClient.Create
	I1008 10:43:13.978754    6992 start.go:128] duration metric: took 2.669289209s to createHost
	I1008 10:43:13.978823    6992 start.go:83] releasing machines lock for "addons-147000", held for 2.669410916s
	W1008 10:43:13.978880    6992 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:43:13.988213    6992 out.go:177] * Deleting "addons-147000" in qemu2 ...
	W1008 10:43:14.013822    6992 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:43:14.013853    6992 start.go:729] Will try again in 5 seconds ...
	I1008 10:43:19.016177    6992 start.go:360] acquireMachinesLock for addons-147000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:43:19.016771    6992 start.go:364] duration metric: took 498µs to acquireMachinesLock for "addons-147000"
	I1008 10:43:19.016887    6992 start.go:93] Provisioning new machine with config: &{Name:addons-147000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-147000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:43:19.017150    6992 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:43:19.032347    6992 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1008 10:43:19.082570    6992 start.go:159] libmachine.API.Create for "addons-147000" (driver="qemu2")
	I1008 10:43:19.082608    6992 client.go:168] LocalClient.Create starting
	I1008 10:43:19.082724    6992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:43:19.082797    6992 main.go:141] libmachine: Decoding PEM data...
	I1008 10:43:19.082812    6992 main.go:141] libmachine: Parsing certificate...
	I1008 10:43:19.082912    6992 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:43:19.082978    6992 main.go:141] libmachine: Decoding PEM data...
	I1008 10:43:19.082990    6992 main.go:141] libmachine: Parsing certificate...
	I1008 10:43:19.083775    6992 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:43:19.236823    6992 main.go:141] libmachine: Creating SSH key...
	I1008 10:43:19.285474    6992 main.go:141] libmachine: Creating Disk image...
	I1008 10:43:19.285480    6992 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:43:19.285664    6992 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2
	I1008 10:43:19.295265    6992 main.go:141] libmachine: STDOUT: 
	I1008 10:43:19.295284    6992 main.go:141] libmachine: STDERR: 
	I1008 10:43:19.295338    6992 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2 +20000M
	I1008 10:43:19.303621    6992 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:43:19.303640    6992 main.go:141] libmachine: STDERR: 
	I1008 10:43:19.303652    6992 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2
	I1008 10:43:19.303656    6992 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:43:19.303664    6992 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:43:19.303700    6992 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/qemu.pid -device virtio-net-pci,netdev=net0,mac=8e:3e:19:b0:2b:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/addons-147000/disk.qcow2
	I1008 10:43:19.305404    6992 main.go:141] libmachine: STDOUT: 
	I1008 10:43:19.305418    6992 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:43:19.305430    6992 client.go:171] duration metric: took 222.816458ms to LocalClient.Create
	I1008 10:43:21.307638    6992 start.go:128] duration metric: took 2.290433208s to createHost
	I1008 10:43:21.307711    6992 start.go:83] releasing machines lock for "addons-147000", held for 2.290920167s
	W1008 10:43:21.308080    6992 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p addons-147000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-147000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:43:21.315896    6992 out.go:201] 
	W1008 10:43:21.323019    6992 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:43:21.323064    6992 out.go:270] * 
	* 
	W1008 10:43:21.325680    6992 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:43:21.335863    6992 out.go:201] 

** /stderr **
addons_test.go:109: out/minikube-darwin-arm64 start -p addons-147000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=qemu2  --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher failed: exit status 80
--- FAIL: TestAddons/Setup (10.15s)
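
Every qemu2 start in this report dies at the same step: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the daemon behind /var/run/socket_vmnet, hence the repeated `Failed to connect to "/var/run/socket_vmnet": Connection refused`. The condition can be checked outside minikube with a plain unix-socket dial; the following is a minimal diagnostic sketch (socket path taken from the log above, not minikube's own code):

	// Minimal sketch: dial the socket_vmnet control socket directly. An error
	// wrapping "connection refused" reproduces the failure in the log above
	// and means no daemon is listening on that path.
	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		conn, err := net.Dial("unix", "/var/run/socket_vmnet") // path from the log above
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

Because the identical refusal shows up in every qemu2 test below, the socket_vmnet daemon was evidently down on this agent for the whole run rather than flaking per test.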

TestCertOptions (10.16s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-691000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-691000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.880183458s)

-- stdout --
	* [cert-options-691000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-691000" primary control-plane node in "cert-options-691000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-691000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-691000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-691000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-691000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-691000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (85.313333ms)

-- stdout --
	* The control-plane node cert-options-691000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-691000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-691000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-691000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-691000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-691000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (43.092709ms)

-- stdout --
	* The control-plane node cert-options-691000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-691000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-691000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-691000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-691000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-10-08 11:06:27.336711 -0700 PDT m=+1457.961837126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-691000 -n cert-options-691000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-691000 -n cert-options-691000: exit status 7 (34.402125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-691000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-691000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-691000
--- FAIL: TestCertOptions (10.16s)
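
For context on the SAN assertions at cert_options_test.go:69 above: the test passes --apiserver-ips and --apiserver-names to minikube start and then expects each value to appear in the Subject Alternative Name extension of /var/lib/minikube/certs/apiserver.crt, read via the openssl command shown. Since the host never left the Stopped state, none of that could run here. A rough offline equivalent in Go, assuming a cert file copied out of a working VM (the local file name is a placeholder, not the test's actual code):

	// Rough offline equivalent of the SAN check: parse a PEM-encoded
	// apiserver cert and list its SAN entries.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("apiserver.crt") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)    // test expects localhost, www.google.com
		fmt.Println("IP SANs: ", cert.IPAddresses) // test expects 127.0.0.1, 192.168.15.15
	}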

TestCertExpiration (197.44s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-831000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-831000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (11.642481709s)

-- stdout --
	* [cert-expiration-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-831000" primary control-plane node in "cert-expiration-831000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-831000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-831000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-831000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-831000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-831000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.640711667s)

-- stdout --
	* [cert-expiration-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-831000" primary control-plane node in "cert-expiration-831000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-831000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-831000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-831000" primary control-plane node in "cert-expiration-831000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-831000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-831000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-10-08 11:09:17.631376 -0700 PDT m=+1628.259458126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-831000 -n cert-expiration-831000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-831000 -n cert-expiration-831000: exit status 7 (68.747958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-831000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-831000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-831000
--- FAIL: TestCertExpiration (197.44s)
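
What the assertion at cert_options_test.go:136 is looking for: the first start mints cluster certificates with --cert-expiration=3m, the test then waits out those three minutes (which is why this test logs 197s even though both starts failed within seconds), and the second start is expected to notice the stale certificates and print an expired-certs warning before regenerating them. The expiry condition itself reduces to a NotAfter comparison; a small illustrative sketch, with a placeholder path rather than minikube's actual cert location:

	// Illustrative only: a certificate is "expired" once time.Now() passes
	// its NotAfter. "client.crt" is a placeholder file name.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		pemBytes, err := os.ReadFile("client.crt") // placeholder path
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().After(cert.NotAfter) {
			fmt.Println("expired at", cert.NotAfter, "- this is when the warning should fire")
		} else {
			fmt.Println("valid until", cert.NotAfter)
		}
	}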

TestDockerFlags (10.09s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-723000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-723000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.845216167s)

-- stdout --
	* [docker-flags-723000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-723000" primary control-plane node in "docker-flags-723000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-723000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:06:07.240989    9158 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:06:07.241138    9158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:07.241141    9158 out.go:358] Setting ErrFile to fd 2...
	I1008 11:06:07.241143    9158 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:07.241290    9158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:06:07.242433    9158 out.go:352] Setting JSON to false
	I1008 11:06:07.260356    9158 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5737,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:06:07.260428    9158 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:06:07.264859    9158 out.go:177] * [docker-flags-723000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:06:07.272765    9158 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:06:07.272824    9158 notify.go:220] Checking for updates...
	I1008 11:06:07.279743    9158 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:06:07.282736    9158 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:06:07.285720    9158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:06:07.288681    9158 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:06:07.291701    9158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:06:07.295157    9158 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:07.295236    9158 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:07.295285    9158 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:06:07.299658    9158 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:06:07.311724    9158 start.go:297] selected driver: qemu2
	I1008 11:06:07.311731    9158 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:06:07.311741    9158 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:06:07.314270    9158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:06:07.317699    9158 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:06:07.320802    9158 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I1008 11:06:07.320832    9158 cni.go:84] Creating CNI manager for ""
	I1008 11:06:07.320868    9158 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:06:07.320877    9158 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:06:07.320917    9158 start.go:340] cluster config:
	{Name:docker-flags-723000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:06:07.325985    9158 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:06:07.333683    9158 out.go:177] * Starting "docker-flags-723000" primary control-plane node in "docker-flags-723000" cluster
	I1008 11:06:07.337726    9158 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:06:07.337745    9158 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:06:07.337760    9158 cache.go:56] Caching tarball of preloaded images
	I1008 11:06:07.337856    9158 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:06:07.337863    9158 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:06:07.337932    9158 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/docker-flags-723000/config.json ...
	I1008 11:06:07.337946    9158 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/docker-flags-723000/config.json: {Name:mkce947d7fd0082c005a3b189a920666b674afe7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:06:07.338327    9158 start.go:360] acquireMachinesLock for docker-flags-723000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:07.338380    9158 start.go:364] duration metric: took 46.167µs to acquireMachinesLock for "docker-flags-723000"
	I1008 11:06:07.338391    9158 start.go:93] Provisioning new machine with config: &{Name:docker-flags-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:07.338425    9158 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:07.345743    9158 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 11:06:07.363824    9158 start.go:159] libmachine.API.Create for "docker-flags-723000" (driver="qemu2")
	I1008 11:06:07.363853    9158 client.go:168] LocalClient.Create starting
	I1008 11:06:07.363922    9158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:07.363962    9158 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:07.363973    9158 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:07.364020    9158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:07.364050    9158 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:07.364058    9158 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:07.364541    9158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:07.508868    9158 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:07.609268    9158 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:07.609275    9158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:07.609474    9158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2
	I1008 11:06:07.619400    9158 main.go:141] libmachine: STDOUT: 
	I1008 11:06:07.619417    9158 main.go:141] libmachine: STDERR: 
	I1008 11:06:07.619479    9158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2 +20000M
	I1008 11:06:07.628015    9158 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:07.628030    9158 main.go:141] libmachine: STDERR: 
	I1008 11:06:07.628045    9158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2
	I1008 11:06:07.628049    9158 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:07.628060    9158 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:07.628096    9158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:b7:53:31:2a:12 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2
	I1008 11:06:07.629958    9158 main.go:141] libmachine: STDOUT: 
	I1008 11:06:07.629979    9158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:07.629999    9158 client.go:171] duration metric: took 266.144417ms to LocalClient.Create
	I1008 11:06:09.632265    9158 start.go:128] duration metric: took 2.293844167s to createHost
	I1008 11:06:09.632353    9158 start.go:83] releasing machines lock for "docker-flags-723000", held for 2.294002459s
	W1008 11:06:09.632479    9158 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:09.657783    9158 out.go:177] * Deleting "docker-flags-723000" in qemu2 ...
	W1008 11:06:09.678301    9158 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:09.678323    9158 start.go:729] Will try again in 5 seconds ...
	I1008 11:06:14.680514    9158 start.go:360] acquireMachinesLock for docker-flags-723000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:14.681074    9158 start.go:364] duration metric: took 438.75µs to acquireMachinesLock for "docker-flags-723000"
	I1008 11:06:14.681201    9158 start.go:93] Provisioning new machine with config: &{Name:docker-flags-723000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:docker-flags-723000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:14.681477    9158 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:14.690353    9158 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 11:06:14.737269    9158 start.go:159] libmachine.API.Create for "docker-flags-723000" (driver="qemu2")
	I1008 11:06:14.737311    9158 client.go:168] LocalClient.Create starting
	I1008 11:06:14.737437    9158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:14.737514    9158 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:14.737532    9158 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:14.737615    9158 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:14.737673    9158 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:14.737693    9158 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:14.738340    9158 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:14.887876    9158 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:14.986946    9158 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:14.986952    9158 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:14.987129    9158 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2
	I1008 11:06:14.996641    9158 main.go:141] libmachine: STDOUT: 
	I1008 11:06:14.996666    9158 main.go:141] libmachine: STDERR: 
	I1008 11:06:14.996725    9158 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2 +20000M
	I1008 11:06:15.005039    9158 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:15.005053    9158 main.go:141] libmachine: STDERR: 
	I1008 11:06:15.005070    9158 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2
	I1008 11:06:15.005076    9158 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:15.005086    9158 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:15.005118    9158 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:ad:7e:26:26:a2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/docker-flags-723000/disk.qcow2
	I1008 11:06:15.006845    9158 main.go:141] libmachine: STDOUT: 
	I1008 11:06:15.006859    9158 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:15.006872    9158 client.go:171] duration metric: took 269.559167ms to LocalClient.Create
	I1008 11:06:17.009079    9158 start.go:128] duration metric: took 2.327553542s to createHost
	I1008 11:06:17.009137    9158 start.go:83] releasing machines lock for "docker-flags-723000", held for 2.328074459s
	W1008 11:06:17.009562    9158 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-723000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:17.023132    9158 out.go:201] 
	W1008 11:06:17.027299    9158 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:06:17.027352    9158 out.go:270] * 
	* 
	W1008 11:06:17.029903    9158 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:06:17.040274    9158 out.go:201] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-723000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
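The disk-image preparation above (qemu-img convert/resize) succeeds on both attempts; the start fails only when libmachine launches qemu-system-aarch64 through socket_vmnet_client and gets "Connection refused" on /var/run/socket_vmnet, i.e. the socket_vmnet daemon is not listening on the CI host. A minimal diagnosis sketch follows: the socket and client paths are taken from the log, while the lsof check and the brew-services restart are assumptions about a standard Homebrew socket_vmnet install, not steps from this run:

    # Is anything listening on the socket minikube targets?
    ls -l /var/run/socket_vmnet
    sudo lsof -U | grep socket_vmnet

    # Reproduce the client-side failure outside minikube; with the daemon
    # down this prints the same "Connection refused" seen above:
    /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

    # If the daemon is down, restart it (assumes a Homebrew-managed
    # service; adjust to however socket_vmnet is launched on this host):
    sudo brew services restart socket_vmnet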
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (83.795792ms)

-- stdout --
	* The control-plane node docker-flags-723000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-723000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-723000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-723000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-723000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-723000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-723000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-723000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (46.305916ms)

-- stdout --
	* The control-plane node docker-flags-723000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-723000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-723000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-723000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-723000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-723000\"\n"
panic.go:629: *** TestDockerFlags FAILED at 2024-10-08 11:06:17.186121 -0700 PDT m=+1447.811071251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-723000 -n docker-flags-723000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-723000 -n docker-flags-723000: exit status 7 (33.911542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-723000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-723000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-723000
--- FAIL: TestDockerFlags (10.09s)
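For reference, the assertions this test never reached: with a booted VM, both --docker-env values must appear in dockerd's systemd Environment and both --docker-opt values in its ExecStart. The commands below are the ones the test runs (docker_test.go:56 and docker_test.go:67); the output sketched after each is what a passing run would need to contain, illustrative rather than captured here:

    out/minikube-darwin-arm64 -p docker-flags-723000 ssh \
        "sudo systemctl show docker --property=Environment --no-pager"
    # Environment=FOO=BAR BAZ=BAT ...

    out/minikube-darwin-arm64 -p docker-flags-723000 ssh \
        "sudo systemctl show docker --property=ExecStart --no-pager"
    # ExecStart=... dockerd ... --debug --icc=true ...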

TestForceSystemdFlag (10.05s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-667000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-667000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.852854084s)

-- stdout --
	* [force-systemd-flag-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-667000" primary control-plane node in "force-systemd-flag-667000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-667000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:05:33.381916    9007 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:05:33.382066    9007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:05:33.382070    9007 out.go:358] Setting ErrFile to fd 2...
	I1008 11:05:33.382072    9007 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:05:33.382198    9007 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:05:33.383336    9007 out.go:352] Setting JSON to false
	I1008 11:05:33.401311    9007 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5703,"bootTime":1728405030,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:05:33.401387    9007 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:05:33.407652    9007 out.go:177] * [force-systemd-flag-667000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:05:33.414624    9007 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:05:33.414659    9007 notify.go:220] Checking for updates...
	I1008 11:05:33.421618    9007 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:05:33.424651    9007 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:05:33.427604    9007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:05:33.430642    9007 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:05:33.433609    9007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:05:33.436924    9007 config.go:182] Loaded profile config "NoKubernetes-490000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:05:33.437000    9007 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:05:33.437055    9007 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:05:33.441619    9007 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:05:33.448627    9007 start.go:297] selected driver: qemu2
	I1008 11:05:33.448635    9007 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:05:33.448642    9007 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:05:33.451177    9007 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:05:33.454637    9007 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:05:33.457802    9007 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 11:05:33.457816    9007 cni.go:84] Creating CNI manager for ""
	I1008 11:05:33.457841    9007 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:05:33.457852    9007 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:05:33.457892    9007 start.go:340] cluster config:
	{Name:force-systemd-flag-667000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:05:33.462806    9007 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:05:33.470614    9007 out.go:177] * Starting "force-systemd-flag-667000" primary control-plane node in "force-systemd-flag-667000" cluster
	I1008 11:05:33.474579    9007 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:05:33.474594    9007 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:05:33.474606    9007 cache.go:56] Caching tarball of preloaded images
	I1008 11:05:33.474732    9007 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:05:33.474746    9007 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:05:33.474812    9007 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/force-systemd-flag-667000/config.json ...
	I1008 11:05:33.474832    9007 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/force-systemd-flag-667000/config.json: {Name:mk6727584e99a51c43bbc3fa999a311d83f4cd72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:05:33.475114    9007 start.go:360] acquireMachinesLock for force-systemd-flag-667000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:05:33.475170    9007 start.go:364] duration metric: took 44.916µs to acquireMachinesLock for "force-systemd-flag-667000"
	I1008 11:05:33.475181    9007 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:05:33.475225    9007 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:05:33.479464    9007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 11:05:33.496374    9007 start.go:159] libmachine.API.Create for "force-systemd-flag-667000" (driver="qemu2")
	I1008 11:05:33.496401    9007 client.go:168] LocalClient.Create starting
	I1008 11:05:33.496473    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:05:33.496510    9007 main.go:141] libmachine: Decoding PEM data...
	I1008 11:05:33.496520    9007 main.go:141] libmachine: Parsing certificate...
	I1008 11:05:33.496565    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:05:33.496594    9007 main.go:141] libmachine: Decoding PEM data...
	I1008 11:05:33.496607    9007 main.go:141] libmachine: Parsing certificate...
	I1008 11:05:33.496992    9007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:05:33.652040    9007 main.go:141] libmachine: Creating SSH key...
	I1008 11:05:33.811052    9007 main.go:141] libmachine: Creating Disk image...
	I1008 11:05:33.811061    9007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:05:33.811345    9007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2
	I1008 11:05:33.821594    9007 main.go:141] libmachine: STDOUT: 
	I1008 11:05:33.821612    9007 main.go:141] libmachine: STDERR: 
	I1008 11:05:33.821670    9007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2 +20000M
	I1008 11:05:33.830129    9007 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:05:33.830151    9007 main.go:141] libmachine: STDERR: 
	I1008 11:05:33.830171    9007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2
	I1008 11:05:33.830179    9007 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:05:33.830191    9007 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:05:33.830232    9007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:14:ec:11:ff:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2
	I1008 11:05:33.832131    9007 main.go:141] libmachine: STDOUT: 
	I1008 11:05:33.832144    9007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:05:33.832165    9007 client.go:171] duration metric: took 335.764791ms to LocalClient.Create
	I1008 11:05:35.834317    9007 start.go:128] duration metric: took 2.359110708s to createHost
	I1008 11:05:35.834396    9007 start.go:83] releasing machines lock for "force-systemd-flag-667000", held for 2.359257208s
	W1008 11:05:35.834497    9007 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:05:35.845559    9007 out.go:177] * Deleting "force-systemd-flag-667000" in qemu2 ...
	W1008 11:05:35.877671    9007 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:05:35.877705    9007 start.go:729] Will try again in 5 seconds ...
	I1008 11:05:40.879833    9007 start.go:360] acquireMachinesLock for force-systemd-flag-667000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:05:40.880280    9007 start.go:364] duration metric: took 298.084µs to acquireMachinesLock for "force-systemd-flag-667000"
	I1008 11:05:40.880378    9007 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-667000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-flag-667000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:05:40.880661    9007 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:05:40.886071    9007 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 11:05:40.933766    9007 start.go:159] libmachine.API.Create for "force-systemd-flag-667000" (driver="qemu2")
	I1008 11:05:40.933817    9007 client.go:168] LocalClient.Create starting
	I1008 11:05:40.933928    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:05:40.933993    9007 main.go:141] libmachine: Decoding PEM data...
	I1008 11:05:40.934013    9007 main.go:141] libmachine: Parsing certificate...
	I1008 11:05:40.934133    9007 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:05:40.934166    9007 main.go:141] libmachine: Decoding PEM data...
	I1008 11:05:40.934181    9007 main.go:141] libmachine: Parsing certificate...
	I1008 11:05:40.934776    9007 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:05:41.097892    9007 main.go:141] libmachine: Creating SSH key...
	I1008 11:05:41.138489    9007 main.go:141] libmachine: Creating Disk image...
	I1008 11:05:41.138495    9007 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:05:41.138692    9007 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2
	I1008 11:05:41.148607    9007 main.go:141] libmachine: STDOUT: 
	I1008 11:05:41.148630    9007 main.go:141] libmachine: STDERR: 
	I1008 11:05:41.148687    9007 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2 +20000M
	I1008 11:05:41.157123    9007 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:05:41.157137    9007 main.go:141] libmachine: STDERR: 
	I1008 11:05:41.157147    9007 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2
	I1008 11:05:41.157151    9007 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:05:41.157159    9007 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:05:41.157185    9007 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:41:80:49:cf:b2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000/disk.qcow2
	I1008 11:05:41.158995    9007 main.go:141] libmachine: STDOUT: 
	I1008 11:05:41.159010    9007 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:05:41.159032    9007 client.go:171] duration metric: took 225.214333ms to LocalClient.Create
	I1008 11:05:43.161160    9007 start.go:128] duration metric: took 2.280513083s to createHost
	I1008 11:05:43.161218    9007 start.go:83] releasing machines lock for "force-systemd-flag-667000", held for 2.280954208s
	W1008 11:05:43.161563    9007 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-667000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:05:43.171780    9007 out.go:201] 
	W1008 11:05:43.176286    9007 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:05:43.176399    9007 out.go:270] * 
	* 
	W1008 11:05:43.179437    9007 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:05:43.192281    9007 out.go:201] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-667000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
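Same socket_vmnet failure signature as TestDockerFlags above (see the diagnosis sketch there): the VM never boots, so the cgroup-driver check below runs against a stopped host. Had the VM started, verification amounts to the single command the test issues (docker_test.go:110); the expected value is inferred from the test's purpose, since --force-systemd should switch Docker's cgroup driver to systemd, and is not observed in this run:

    out/minikube-darwin-arm64 -p force-systemd-flag-667000 ssh \
        "docker info --format {{.CgroupDriver}}"
    # systemd   <- expected with --force-systemd; cgroupfs otherwise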
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-667000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-667000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.577125ms)

-- stdout --
	* The control-plane node force-systemd-flag-667000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-667000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-667000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-10-08 11:05:43.287315 -0700 PDT m=+1413.911676668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-667000 -n force-systemd-flag-667000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-667000 -n force-systemd-flag-667000: exit status 7 (35.794708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-667000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-667000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-667000
--- FAIL: TestForceSystemdFlag (10.05s)
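As in the other failures in this report, everything up to the VM launch works, so the disk step can be ruled out by replaying it standalone. A minimal sketch, with the per-profile machine directory factored into a MACHINE_DIR variable of ours; the two qemu-img commands are exactly the ones libmachine executes above:

    # Disk-creation sequence that succeeds in every attempt in this report
    # (assumes disk.qcow2.raw already exists, as created by libmachine):
    MACHINE_DIR=/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-flag-667000
    qemu-img convert -f raw -O qcow2 "$MACHINE_DIR/disk.qcow2.raw" "$MACHINE_DIR/disk.qcow2"
    qemu-img resize "$MACHINE_DIR/disk.qcow2" +20000M   # prints: Image resized.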

TestForceSystemdEnv (10.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-898000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-898000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.930897625s)

-- stdout --
	* [force-systemd-env-898000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-898000" primary control-plane node in "force-systemd-env-898000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-898000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:05:57.110485    9119 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:05:57.110647    9119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:05:57.110650    9119 out.go:358] Setting ErrFile to fd 2...
	I1008 11:05:57.110652    9119 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:05:57.110770    9119 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:05:57.111953    9119 out.go:352] Setting JSON to false
	I1008 11:05:57.129730    9119 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5727,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:05:57.129802    9119 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:05:57.135058    9119 out.go:177] * [force-systemd-env-898000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:05:57.142067    9119 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:05:57.142118    9119 notify.go:220] Checking for updates...
	I1008 11:05:57.150028    9119 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:05:57.153942    9119 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:05:57.157032    9119 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:05:57.160055    9119 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:05:57.163018    9119 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I1008 11:05:57.166386    9119 config.go:182] Loaded profile config "NoKubernetes-490000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v0.0.0
	I1008 11:05:57.166467    9119 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:05:57.166516    9119 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:05:57.171033    9119 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:05:57.179023    9119 start.go:297] selected driver: qemu2
	I1008 11:05:57.179031    9119 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:05:57.179037    9119 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:05:57.181511    9119 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:05:57.183010    9119 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:05:57.186054    9119 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 11:05:57.186067    9119 cni.go:84] Creating CNI manager for ""
	I1008 11:05:57.186091    9119 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:05:57.186103    9119 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:05:57.186148    9119 start.go:340] cluster config:
	{Name:force-systemd-env-898000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-898000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:05:57.190965    9119 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:05:57.206624    9119 out.go:177] * Starting "force-systemd-env-898000" primary control-plane node in "force-systemd-env-898000" cluster
	I1008 11:05:57.211081    9119 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:05:57.211099    9119 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:05:57.211112    9119 cache.go:56] Caching tarball of preloaded images
	I1008 11:05:57.211217    9119 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:05:57.211223    9119 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:05:57.211303    9119 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/force-systemd-env-898000/config.json ...
	I1008 11:05:57.211314    9119 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/force-systemd-env-898000/config.json: {Name:mk38979f5146bf8d80a54c4e60715828af7f4106 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:05:57.211723    9119 start.go:360] acquireMachinesLock for force-systemd-env-898000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:05:57.211782    9119 start.go:364] duration metric: took 50.083µs to acquireMachinesLock for "force-systemd-env-898000"
	I1008 11:05:57.211794    9119 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-898000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-898000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:05:57.211831    9119 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:05:57.220010    9119 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 11:05:57.239147    9119 start.go:159] libmachine.API.Create for "force-systemd-env-898000" (driver="qemu2")
	I1008 11:05:57.239177    9119 client.go:168] LocalClient.Create starting
	I1008 11:05:57.239261    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:05:57.239303    9119 main.go:141] libmachine: Decoding PEM data...
	I1008 11:05:57.239314    9119 main.go:141] libmachine: Parsing certificate...
	I1008 11:05:57.239358    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:05:57.239391    9119 main.go:141] libmachine: Decoding PEM data...
	I1008 11:05:57.239404    9119 main.go:141] libmachine: Parsing certificate...
	I1008 11:05:57.239812    9119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:05:57.385284    9119 main.go:141] libmachine: Creating SSH key...
	I1008 11:05:57.553063    9119 main.go:141] libmachine: Creating Disk image...
	I1008 11:05:57.553072    9119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:05:57.553307    9119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2
	I1008 11:05:57.563551    9119 main.go:141] libmachine: STDOUT: 
	I1008 11:05:57.563566    9119 main.go:141] libmachine: STDERR: 
	I1008 11:05:57.563622    9119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2 +20000M
	I1008 11:05:57.572140    9119 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:05:57.572158    9119 main.go:141] libmachine: STDERR: 
	I1008 11:05:57.572176    9119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2
	I1008 11:05:57.572182    9119 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:05:57.572193    9119 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:05:57.572217    9119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:65:22:b5:36:d6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2
	I1008 11:05:57.574048    9119 main.go:141] libmachine: STDOUT: 
	I1008 11:05:57.574063    9119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:05:57.574083    9119 client.go:171] duration metric: took 334.905416ms to LocalClient.Create
	I1008 11:05:59.575983    9119 start.go:128] duration metric: took 2.36416075s to createHost
	I1008 11:05:59.576055    9119 start.go:83] releasing machines lock for "force-systemd-env-898000", held for 2.364303625s
	W1008 11:05:59.576185    9119 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:05:59.586075    9119 out.go:177] * Deleting "force-systemd-env-898000" in qemu2 ...
	W1008 11:05:59.611882    9119 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:05:59.611905    9119 start.go:729] Will try again in 5 seconds ...
	I1008 11:06:04.614158    9119 start.go:360] acquireMachinesLock for force-systemd-env-898000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:04.614775    9119 start.go:364] duration metric: took 497.125µs to acquireMachinesLock for "force-systemd-env-898000"
	I1008 11:06:04.614921    9119 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-898000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:force-systemd-env-898000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:04.615225    9119 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:04.628787    9119 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I1008 11:06:04.678810    9119 start.go:159] libmachine.API.Create for "force-systemd-env-898000" (driver="qemu2")
	I1008 11:06:04.678885    9119 client.go:168] LocalClient.Create starting
	I1008 11:06:04.679082    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:04.679171    9119 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:04.679187    9119 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:04.679264    9119 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:04.679321    9119 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:04.679336    9119 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:04.679937    9119 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:04.834920    9119 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:04.943960    9119 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:04.943966    9119 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:04.944186    9119 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2
	I1008 11:06:04.954255    9119 main.go:141] libmachine: STDOUT: 
	I1008 11:06:04.954274    9119 main.go:141] libmachine: STDERR: 
	I1008 11:06:04.954323    9119 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2 +20000M
	I1008 11:06:04.962767    9119 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:04.962780    9119 main.go:141] libmachine: STDERR: 
	I1008 11:06:04.962796    9119 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2
	I1008 11:06:04.962799    9119 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:04.962806    9119 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:04.962833    9119 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:f5:38:82:da:e6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/force-systemd-env-898000/disk.qcow2
	I1008 11:06:04.964590    9119 main.go:141] libmachine: STDOUT: 
	I1008 11:06:04.964603    9119 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:04.964617    9119 client.go:171] duration metric: took 285.715084ms to LocalClient.Create
	I1008 11:06:06.966760    9119 start.go:128] duration metric: took 2.351548666s to createHost
	I1008 11:06:06.966831    9119 start.go:83] releasing machines lock for "force-systemd-env-898000", held for 2.352071208s
	W1008 11:06:06.967208    9119 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-898000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-898000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:06.976686    9119 out.go:201] 
	W1008 11:06:06.981777    9119 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:06:06.981804    9119 out.go:270] * 
	* 
	W1008 11:06:06.984828    9119 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:06:06.993723    9119 out.go:201] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-898000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-898000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-898000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (81.782208ms)

-- stdout --
	* The control-plane node force-systemd-env-898000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-898000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-898000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-10-08 11:06:07.093668 -0700 PDT m=+1437.718443168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-898000 -n force-systemd-env-898000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-898000 -n force-systemd-env-898000: exit status 7 (36.8205ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-898000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-898000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-898000
--- FAIL: TestForceSystemdEnv (10.13s)
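Every failure in this group reduces to the same root cause: nothing is listening on /var/run/socket_vmnet, so the qemu2 driver cannot attach the VM to the socket_vmnet network. A quick way to confirm this on the CI host, before any QEMU process is involved, is to probe the Unix socket directly. The following standalone Go sketch is hypothetical (it is not part of the minikube test suite); it assumes only that the socket path matches the SocketVMnetPath shown in the cluster config above.

    // socketcheck.go: hypothetical triage helper, not part of the minikube repo.
    // Dials the socket_vmnet control socket that every qemu2 start in this report
    // depends on; "connection refused" here reproduces the failures above.
    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the config dump
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, the likely fix is on the host rather than in the tests: restart the socket_vmnet daemon (typically run as a root service on macOS) and re-run the suite.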

TestErrorSpam/setup (9.98s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-757000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-757000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 --driver=qemu2 : exit status 80 (9.973843333s)

-- stdout --
	* [nospam-757000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-757000" primary control-plane node in "nospam-757000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-757000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-757000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-757000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-757000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19774
- KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-757000" primary control-plane node in "nospam-757000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-757000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-757000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.98s)
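TestErrorSpam/setup fails on the presence of unexpected stderr lines, not on a specific start-up assertion: every warning ("!"), fatal ("X"), and stray decoration line counts against it. A simplified Go sketch of that filtering idea follows; it is an assumption-laden reduction, since the real error_spam_test.go also checks allow-lists and the kubeadm sub-steps listed above.

    // spamcheck.go: simplified sketch of the error-spam idea; the actual
    // allow-list and kubeadm sub-step checks in error_spam_test.go differ.
    package main

    import (
        "fmt"
        "strings"
    )

    // unexpectedStderr returns the stderr lines this sketch counts as spam:
    // minikube warnings ("!") and fatal errors ("X").
    func unexpectedStderr(stderr string) []string {
        var spam []string
        for _, line := range strings.Split(stderr, "\n") {
            line = strings.TrimSpace(line)
            if strings.HasPrefix(line, "!") || strings.HasPrefix(line, "X") {
                spam = append(spam, line)
            }
        }
        return spam
    }

    func main() {
        stderr := "! StartHost failed, but will try again: ...\n* Deleting \"nospam-757000\" in qemu2 ..."
        for _, line := range unexpectedStderr(stderr) {
            fmt.Printf("unexpected stderr: %q\n", line)
        }
    }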

TestFunctional/serial/StartWithProxy (9.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-099000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2234: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-099000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.816254416s)

-- stdout --
	* [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-099000" primary control-plane node in "functional-099000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-099000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51058 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51058 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:51058 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2236: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-099000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2241: start stdout=* [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
- MINIKUBE_LOCATION=19774
- KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-099000" primary control-plane node in "functional-099000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-099000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2246: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:51058 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:51058 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:51058 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (74.322792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.89s)
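StartWithProxy drives a normal start with HTTP_PROXY pointed at a local listener and expects proxy-related notices ("Found network options:", "You appear to be using a proxy") in the output; those expectations are never reached here because the VM never boots. A minimal Go sketch of the driving pattern, assuming the same binary path, profile name, and flags shown in the test invocation above:

    // proxystart.go: hypothetical sketch of the StartWithProxy pattern:
    // run minikube with a local HTTP_PROXY and capture combined output.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "functional-099000",
            "--memory=4000", "--apiserver-port=8441", "--wait=all", "--driver=qemu2")
        // The test injects a proxy pointing at a listener on the host.
        cmd.Env = append(os.Environ(), "HTTP_PROXY=localhost:51058")
        out, err := cmd.CombinedOutput()
        fmt.Printf("exit: %v\n%s", err, out)
    }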

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
I1008 10:43:52.609989    6907 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-099000 --alsologtostderr -v=8
functional_test.go:659: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-099000 --alsologtostderr -v=8: exit status 80 (5.191704375s)

-- stdout --
	* [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-099000" primary control-plane node in "functional-099000" cluster
	* Restarting existing qemu2 VM for "functional-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:43:52.643564    7160 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:43:52.643723    7160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:43:52.643726    7160 out.go:358] Setting ErrFile to fd 2...
	I1008 10:43:52.643729    7160 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:43:52.643850    7160 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:43:52.644946    7160 out.go:352] Setting JSON to false
	I1008 10:43:52.662781    7160 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4402,"bootTime":1728405030,"procs":555,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:43:52.662860    7160 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:43:52.667436    7160 out.go:177] * [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:43:52.674440    7160 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:43:52.674500    7160 notify.go:220] Checking for updates...
	I1008 10:43:52.680415    7160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:43:52.683460    7160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:43:52.686385    7160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:43:52.689455    7160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:43:52.692459    7160 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:43:52.695656    7160 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:43:52.695716    7160 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:43:52.700404    7160 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:43:52.707381    7160 start.go:297] selected driver: qemu2
	I1008 10:43:52.707388    7160 start.go:901] validating driver "qemu2" against &{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:43:52.707459    7160 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:43:52.709949    7160 cni.go:84] Creating CNI manager for ""
	I1008 10:43:52.709979    7160 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:43:52.710024    7160 start.go:340] cluster config:
	{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:43:52.714579    7160 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:43:52.721283    7160 out.go:177] * Starting "functional-099000" primary control-plane node in "functional-099000" cluster
	I1008 10:43:52.725446    7160 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:43:52.725470    7160 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:43:52.725480    7160 cache.go:56] Caching tarball of preloaded images
	I1008 10:43:52.725582    7160 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:43:52.725588    7160 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:43:52.725648    7160 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/functional-099000/config.json ...
	I1008 10:43:52.726120    7160 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:43:52.726157    7160 start.go:364] duration metric: took 30.459µs to acquireMachinesLock for "functional-099000"
	I1008 10:43:52.726166    7160 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:43:52.726170    7160 fix.go:54] fixHost starting: 
	I1008 10:43:52.726307    7160 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
	W1008 10:43:52.726317    7160 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:43:52.734423    7160 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
	I1008 10:43:52.738393    7160 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:43:52.738439    7160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
	I1008 10:43:52.740648    7160 main.go:141] libmachine: STDOUT: 
	I1008 10:43:52.740668    7160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:43:52.740696    7160 fix.go:56] duration metric: took 14.524084ms for fixHost
	I1008 10:43:52.740700    7160 start.go:83] releasing machines lock for "functional-099000", held for 14.538875ms
	W1008 10:43:52.740707    7160 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:43:52.740744    7160 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:43:52.740749    7160 start.go:729] Will try again in 5 seconds ...
	I1008 10:43:57.742837    7160 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:43:57.743237    7160 start.go:364] duration metric: took 280.583µs to acquireMachinesLock for "functional-099000"
	I1008 10:43:57.743374    7160 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:43:57.743394    7160 fix.go:54] fixHost starting: 
	I1008 10:43:57.744149    7160 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
	W1008 10:43:57.744174    7160 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:43:57.747684    7160 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
	I1008 10:43:57.754504    7160 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:43:57.754708    7160 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
	I1008 10:43:57.764793    7160 main.go:141] libmachine: STDOUT: 
	I1008 10:43:57.764874    7160 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:43:57.764944    7160 fix.go:56] duration metric: took 21.552083ms for fixHost
	I1008 10:43:57.764957    7160 start.go:83] releasing machines lock for "functional-099000", held for 21.676333ms
	W1008 10:43:57.765147    7160 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:43:57.773504    7160 out.go:201] 
	W1008 10:43:57.777582    7160 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:43:57.777616    7160 out.go:270] * 
	* 
	W1008 10:43:57.780248    7160 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:43:57.788523    7160 out.go:201] 

** /stderr **
functional_test.go:661: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-099000 --alsologtostderr -v=8": exit status 80
functional_test.go:663: soft start took 5.193361709s for "functional-099000" cluster.
I1008 10:43:57.803648    6907 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (74.096ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
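The post-mortem helper that runs after each of these failures checks host state with `minikube status --format={{.Host}}`; a non-zero exit is expected for a stopped host, which is why the log records "status error: exit status 7 (may be ok)". A Go sketch of reading that status programmatically follows; treating exit code 7 as a stopped host is taken from the log above, not from any minikube documentation, so consider it an assumption.

    // hoststatus.go: hypothetical sketch of the helpers' post-mortem check.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "functional-099000", "-n", "functional-099000")
        out, err := cmd.Output()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // Matches the "status error: exit status 7 (may be ok)" lines above.
            fmt.Printf("status exit code %d: %s", exitErr.ExitCode(), out)
            return
        }
        fmt.Printf("host status: %s", out)
    }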

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
functional_test.go:681: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (27.897958ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:683: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:687: expected current-context = "functional-099000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (33.92275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
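KubeContext only asserts that `kubectl config current-context` returns the profile name; because the earlier start never wrote a kubeconfig entry, the context is unset and the test fails immediately. A minimal Go sketch of the same assertion, assuming kubectl on PATH and the profile name from this run:

    // ctxcheck.go: hypothetical sketch of the KubeContext assertion.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "config", "current-context").Output()
        if err != nil {
            fmt.Printf("no current context (start failed earlier): %v\n", err)
            return
        }
        if got := strings.TrimSpace(string(out)); got != "functional-099000" {
            fmt.Printf("expected context %q, got %q\n", "functional-099000", got)
        } else {
            fmt.Println("context OK")
        }
    }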

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-099000 get po -A
functional_test.go:696: (dbg) Non-zero exit: kubectl --context functional-099000 get po -A: exit status 1 (26.97925ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000

** /stderr **
functional_test.go:698: failed to get kubectl pods: args "kubectl --context functional-099000 get po -A" : exit status 1
functional_test.go:702: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-099000\n"*: args "kubectl --context functional-099000 get po -A"
functional_test.go:705: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-099000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (33.768542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl images
functional_test.go:1124: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl images: exit status 83 (54.960833ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1126: failed to get images by "out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1130: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.06s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (44.64375ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1150: failed to manually delete image "out/minikube-darwin-arm64 -p functional-099000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1153: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (43.809667ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1158: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1163: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (44.924917ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1165: expected "out/minikube-darwin-arm64 -p functional-099000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.77s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 kubectl -- --context functional-099000 get pods
functional_test.go:716: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 kubectl -- --context functional-099000 get pods: exit status 1 (737.298625ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-099000
	* no server found for cluster "functional-099000"

** /stderr **
functional_test.go:719: failed to get pods. args "out/minikube-darwin-arm64 -p functional-099000 kubectl -- --context functional-099000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (35.510209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.77s)
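
The kubectl failures here are a knock-on effect: because the cluster never started, no "functional-099000" context was ever written to the kubeconfig. A quick check from the same environment (a sketch using stock kubectl subcommands and the KUBECONFIG path reported elsewhere in this log):

    # A missing functional-099000 entry here explains
    # "context was not found for specified context".
    KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig kubectl config get-contexts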

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.25s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-099000 get pods
functional_test.go:741: (dbg) Non-zero exit: out/kubectl --context functional-099000 get pods: exit status 1 (1.212791542s)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-099000
	* no server found for cluster "functional-099000"

** /stderr **
functional_test.go:744: failed to run kubectl directly. args "out/kubectl --context functional-099000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (33.298666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.25s)

TestFunctional/serial/ExtraConfig (5.28s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-099000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-099000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.203930667s)

-- stdout --
	* [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-099000" primary control-plane node in "functional-099000" cluster
	* Restarting existing qemu2 VM for "functional-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-099000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:759: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-099000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:761: restart took 5.204518583s for "functional-099000" cluster.
I1008 10:44:08.614542    6907 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (73.94075ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.28s)
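
Both restart attempts die at the same point: qemu is launched through socket_vmnet_client, which cannot reach a socket_vmnet daemon at /var/run/socket_vmnet. A minimal host-side check (a sketch; the relaunch line follows the socket_vmnet README for the /opt/socket_vmnet prefix shown in the log, so treat its exact flags as an assumption rather than something taken from this report):

    # Is anything serving the socket qemu needs?
    ls -l /var/run/socket_vmnet
    pgrep -fl socket_vmnet
    # If not, relaunch the daemon (flags per the socket_vmnet README; assumption).
    sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet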

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-099000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:810: (dbg) Non-zero exit: kubectl --context functional-099000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (28.386875ms)

** stderr ** 
	error: context "functional-099000" does not exist

** /stderr **
functional_test.go:812: failed to get components. args "kubectl --context functional-099000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (34.79525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
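
ComponentHealth only queries control-plane pods through the missing context, so it fails identically. For reference, the probe it runs, plus a condensed variant of the same selector (a sketch; needs a live cluster):

    # What the test runs: control-plane pods as JSON, for per-pod condition checks.
    kubectl --context functional-099000 get po -l tier=control-plane -n kube-system -o=json
    # Same selector, quicker to eyeball.
    kubectl --context functional-099000 get po -l tier=control-plane -n kube-system -o wide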

TestFunctional/serial/LogsCmd (0.08s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 logs
functional_test.go:1236: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 logs: exit status 83 (78.933625ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |                     |
	|         | -p download-only-430000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT | 08 Oct 24 10:42 PDT |
	| delete  | -p download-only-430000                                                  | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT | 08 Oct 24 10:42 PDT |
	| start   | -o=json --download-only                                                  | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |                     |
	|         | -p download-only-500000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| delete  | -p download-only-500000                                                  | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| delete  | -p download-only-430000                                                  | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| delete  | -p download-only-500000                                                  | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| start   | --download-only -p                                                       | binary-mirror-678000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | binary-mirror-678000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:51023                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-678000                                                  | binary-mirror-678000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| addons  | disable dashboard -p                                                     | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | addons-147000                                                            |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                      | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | addons-147000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-147000 --wait=true                                             | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
	|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	| delete  | -p addons-147000                                                         | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| start   | -p nospam-757000 -n=1 --memory=2250 --wait=false                         | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-757000                                                         | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	| start   | -p functional-099000                                                     | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-099000                                                     | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	|         | minikube-local-cache-test:functional-099000                              |                      |         |         |                     |                     |
	| cache   | functional-099000 cache delete                                           | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	|         | minikube-local-cache-test:functional-099000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	| ssh     | functional-099000 ssh sudo                                               | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-099000                                                        | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-099000 ssh                                                    | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-099000 cache reload                                           | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	| ssh     | functional-099000 ssh                                                    | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-099000 kubectl --                                             | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
	|         | --context functional-099000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-099000                                                     | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 10:44:03
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 10:44:03.440070    7235 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:44:03.440246    7235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:03.440248    7235 out.go:358] Setting ErrFile to fd 2...
	I1008 10:44:03.440249    7235 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:03.441052    7235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:44:03.442662    7235 out.go:352] Setting JSON to false
	I1008 10:44:03.461053    7235 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4413,"bootTime":1728405030,"procs":556,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:44:03.461137    7235 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:44:03.467877    7235 out.go:177] * [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:44:03.476784    7235 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:44:03.476786    7235 notify.go:220] Checking for updates...
	I1008 10:44:03.485666    7235 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:44:03.489721    7235 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:44:03.492781    7235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:44:03.495663    7235 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:44:03.498739    7235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:44:03.502045    7235 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:44:03.502098    7235 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:44:03.506708    7235 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:44:03.513700    7235 start.go:297] selected driver: qemu2
	I1008 10:44:03.513704    7235 start.go:901] validating driver "qemu2" against &{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:44:03.513765    7235 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:44:03.516344    7235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:44:03.516370    7235 cni.go:84] Creating CNI manager for ""
	I1008 10:44:03.516400    7235 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:44:03.516454    7235 start.go:340] cluster config:
	{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:44:03.521362    7235 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:44:03.528741    7235 out.go:177] * Starting "functional-099000" primary control-plane node in "functional-099000" cluster
	I1008 10:44:03.531680    7235 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:44:03.531700    7235 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:44:03.531710    7235 cache.go:56] Caching tarball of preloaded images
	I1008 10:44:03.531808    7235 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:44:03.531812    7235 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:44:03.531891    7235 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/functional-099000/config.json ...
	I1008 10:44:03.532289    7235 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:44:03.532337    7235 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "functional-099000"
	I1008 10:44:03.532348    7235 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:44:03.532350    7235 fix.go:54] fixHost starting: 
	I1008 10:44:03.532469    7235 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
	W1008 10:44:03.532477    7235 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:44:03.540555    7235 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
	I1008 10:44:03.544713    7235 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:44:03.544757    7235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
	I1008 10:44:03.547101    7235 main.go:141] libmachine: STDOUT: 
	I1008 10:44:03.547117    7235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:44:03.547145    7235 fix.go:56] duration metric: took 14.792125ms for fixHost
	I1008 10:44:03.547149    7235 start.go:83] releasing machines lock for "functional-099000", held for 14.808833ms
	W1008 10:44:03.547154    7235 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:44:03.547195    7235 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:44:03.547199    7235 start.go:729] Will try again in 5 seconds ...
	I1008 10:44:08.549401    7235 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:44:08.549891    7235 start.go:364] duration metric: took 419.083µs to acquireMachinesLock for "functional-099000"
	I1008 10:44:08.550028    7235 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:44:08.550045    7235 fix.go:54] fixHost starting: 
	I1008 10:44:08.550868    7235 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
	W1008 10:44:08.550890    7235 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:44:08.560269    7235 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
	I1008 10:44:08.565359    7235 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:44:08.565624    7235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
	I1008 10:44:08.576803    7235 main.go:141] libmachine: STDOUT: 
	I1008 10:44:08.576850    7235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:44:08.576946    7235 fix.go:56] duration metric: took 26.907167ms for fixHost
	I1008 10:44:08.576962    7235 start.go:83] releasing machines lock for "functional-099000", held for 27.049041ms
	W1008 10:44:08.577162    7235 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:44:08.584391    7235 out.go:201] 
	W1008 10:44:08.588208    7235 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:44:08.588226    7235 out.go:270] * 
	W1008 10:44:08.590103    7235 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:44:08.600301    7235 out.go:201] 
	
	
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1238: out/minikube-darwin-arm64 -p functional-099000 logs failed: exit status 83
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |                     |
|         | -p download-only-430000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT | 08 Oct 24 10:42 PDT |
| delete  | -p download-only-430000                                                  | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT | 08 Oct 24 10:42 PDT |
| start   | -o=json --download-only                                                  | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |                     |
|         | -p download-only-500000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.31.1                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| delete  | -p download-only-500000                                                  | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| delete  | -p download-only-430000                                                  | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| delete  | -p download-only-500000                                                  | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| start   | --download-only -p                                                       | binary-mirror-678000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | binary-mirror-678000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:51023                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-678000                                                  | binary-mirror-678000 | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| addons  | disable dashboard -p                                                     | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | addons-147000                                                            |                      |         |         |                     |                     |
| addons  | enable dashboard -p                                                      | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | addons-147000                                                            |                      |         |         |                     |                     |
| start   | -p addons-147000 --wait=true                                             | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --addons=volcano                                           |                      |         |         |                     |                     |
|         | --driver=qemu2  --addons=ingress                                         |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
| delete  | -p addons-147000                                                         | addons-147000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| start   | -p nospam-757000 -n=1 --memory=2250 --wait=false                         | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | --log_dir=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-757000 --log_dir                                                  | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
|         | /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-757000                                                         | nospam-757000        | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
| start   | -p functional-099000                                                     | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-099000                                                     | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:43 PDT | 08 Oct 24 10:43 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-099000 cache add                                              | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
|         | minikube-local-cache-test:functional-099000                              |                      |         |         |                     |                     |
| cache   | functional-099000 cache delete                                           | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
|         | minikube-local-cache-test:functional-099000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
| ssh     | functional-099000 ssh sudo                                               | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-099000                                                        | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-099000 ssh                                                    | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-099000 cache reload                                           | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
| ssh     | functional-099000 ssh                                                    | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT | 08 Oct 24 10:44 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-099000 kubectl --                                             | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
|         | --context functional-099000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-099000                                                     | functional-099000    | jenkins | v1.34.0 | 08 Oct 24 10:44 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/10/08 10:44:03
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1008 10:44:03.440070    7235 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:03.440246    7235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:03.440248    7235 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:03.440249    7235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:03.441052    7235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:03.442662    7235 out.go:352] Setting JSON to false
I1008 10:44:03.461053    7235 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4413,"bootTime":1728405030,"procs":556,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1008 10:44:03.461137    7235 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1008 10:44:03.467877    7235 out.go:177] * [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1008 10:44:03.476784    7235 out.go:177]   - MINIKUBE_LOCATION=19774
I1008 10:44:03.476786    7235 notify.go:220] Checking for updates...
I1008 10:44:03.485666    7235 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
I1008 10:44:03.489721    7235 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1008 10:44:03.492781    7235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1008 10:44:03.495663    7235 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
I1008 10:44:03.498739    7235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1008 10:44:03.502045    7235 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:03.502098    7235 driver.go:394] Setting default libvirt URI to qemu:///system
I1008 10:44:03.506708    7235 out.go:177] * Using the qemu2 driver based on existing profile
I1008 10:44:03.513700    7235 start.go:297] selected driver: qemu2
I1008 10:44:03.513704    7235 start.go:901] validating driver "qemu2" against &{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 10:44:03.513765    7235 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1008 10:44:03.516344    7235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 10:44:03.516370    7235 cni.go:84] Creating CNI manager for ""
I1008 10:44:03.516400    7235 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1008 10:44:03.516454    7235 start.go:340] cluster config:
{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 10:44:03.521362    7235 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 10:44:03.528741    7235 out.go:177] * Starting "functional-099000" primary control-plane node in "functional-099000" cluster
I1008 10:44:03.531680    7235 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1008 10:44:03.531700    7235 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1008 10:44:03.531710    7235 cache.go:56] Caching tarball of preloaded images
I1008 10:44:03.531808    7235 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1008 10:44:03.531812    7235 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1008 10:44:03.531891    7235 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/functional-099000/config.json ...
I1008 10:44:03.532289    7235 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1008 10:44:03.532337    7235 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "functional-099000"
I1008 10:44:03.532348    7235 start.go:96] Skipping create...Using existing machine configuration
I1008 10:44:03.532350    7235 fix.go:54] fixHost starting: 
I1008 10:44:03.532469    7235 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
W1008 10:44:03.532477    7235 fix.go:138] unexpected machine state, will restart: <nil>
I1008 10:44:03.540555    7235 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
I1008 10:44:03.544713    7235 qemu.go:418] Using hvf for hardware acceleration
I1008 10:44:03.544757    7235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
I1008 10:44:03.547101    7235 main.go:141] libmachine: STDOUT: 
I1008 10:44:03.547117    7235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1008 10:44:03.547145    7235 fix.go:56] duration metric: took 14.792125ms for fixHost
I1008 10:44:03.547149    7235 start.go:83] releasing machines lock for "functional-099000", held for 14.808833ms
W1008 10:44:03.547154    7235 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1008 10:44:03.547195    7235 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1008 10:44:03.547199    7235 start.go:729] Will try again in 5 seconds ...
I1008 10:44:08.549401    7235 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1008 10:44:08.549891    7235 start.go:364] duration metric: took 419.083µs to acquireMachinesLock for "functional-099000"
I1008 10:44:08.550028    7235 start.go:96] Skipping create...Using existing machine configuration
I1008 10:44:08.550045    7235 fix.go:54] fixHost starting: 
I1008 10:44:08.550868    7235 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
W1008 10:44:08.550890    7235 fix.go:138] unexpected machine state, will restart: <nil>
I1008 10:44:08.560269    7235 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
I1008 10:44:08.565359    7235 qemu.go:418] Using hvf for hardware acceleration
I1008 10:44:08.565624    7235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
I1008 10:44:08.576803    7235 main.go:141] libmachine: STDOUT: 
I1008 10:44:08.576850    7235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1008 10:44:08.576946    7235 fix.go:56] duration metric: took 26.907167ms for fixHost
I1008 10:44:08.576962    7235 start.go:83] releasing machines lock for "functional-099000", held for 27.049041ms
W1008 10:44:08.577162    7235 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1008 10:44:08.584391    7235 out.go:201] 
W1008 10:44:08.588208    7235 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1008 10:44:08.588226    7235 out.go:270] * 
W1008 10:44:08.590103    7235 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1008 10:44:08.600301    7235 out.go:201] 

* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)
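Every start and restart attempt in this log fails at the same step: the qemu2 driver launches the VM through socket_vmnet, and the client cannot reach the daemon's socket (Failed to connect to "/var/run/socket_vmnet": Connection refused), so the machine never leaves state=Stopped and the dependent tests fail immediately. A minimal diagnostic sketch for the CI host follows; it assumes socket_vmnet is installed under /opt/socket_vmnet, as the client path in the log suggests, and the gateway address is an assumption, not taken from this report:

# is the socket_vmnet daemon running, and does its socket exist?
pgrep -fl socket_vmnet
ls -l /var/run/socket_vmnet
# if the daemon is down, restarting it as root (the vmnet framework
# requires root) may clear the "Connection refused" failures;
# the gateway IP below is assumed
sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet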

TestFunctional/serial/LogsFileCmd (0.07s)

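This test exercises the same logs path as the previous failure, but writes the output to a temporary file and then scans it for expected words such as "Linux". A rough manual equivalent of that check, with a hypothetical $LOGFILE standing in for the temp path the test creates:

LOGFILE=/tmp/minikube-logs.txt   # hypothetical path; the test uses its own temp dir
out/minikube-darwin-arm64 -p functional-099000 logs --file "$LOGFILE"
grep -q "Linux" "$LOGFILE" || echo "missing expected word: Linux"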
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 logs --file /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialLogsFileCmd1732569363/001/logs.txt
functional_test.go:1228: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the audit table shown above for TestFunctional/serial/LogsCmd)

==> Last Start <==
Log file created at: 2024/10/08 10:44:03
Running on machine: MacOS-M1-Agent-1
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1008 10:44:03.440070    7235 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:03.440246    7235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:03.440248    7235 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:03.440249    7235 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:03.441052    7235 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:03.442662    7235 out.go:352] Setting JSON to false
I1008 10:44:03.461053    7235 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4413,"bootTime":1728405030,"procs":556,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
W1008 10:44:03.461137    7235 start.go:137] gopshost.Virtualization returned error: not implemented yet
I1008 10:44:03.467877    7235 out.go:177] * [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
I1008 10:44:03.476784    7235 out.go:177]   - MINIKUBE_LOCATION=19774
I1008 10:44:03.476786    7235 notify.go:220] Checking for updates...
I1008 10:44:03.485666    7235 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
I1008 10:44:03.489721    7235 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I1008 10:44:03.492781    7235 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I1008 10:44:03.495663    7235 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
I1008 10:44:03.498739    7235 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I1008 10:44:03.502045    7235 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:03.502098    7235 driver.go:394] Setting default libvirt URI to qemu:///system
I1008 10:44:03.506708    7235 out.go:177] * Using the qemu2 driver based on existing profile
I1008 10:44:03.513700    7235 start.go:297] selected driver: qemu2
I1008 10:44:03.513704    7235 start.go:901] validating driver "qemu2" against &{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 10:44:03.513765    7235 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I1008 10:44:03.516344    7235 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I1008 10:44:03.516370    7235 cni.go:84] Creating CNI manager for ""
I1008 10:44:03.516400    7235 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I1008 10:44:03.516454    7235 start.go:340] cluster config:
{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I1008 10:44:03.521362    7235 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 10:44:03.528741    7235 out.go:177] * Starting "functional-099000" primary control-plane node in "functional-099000" cluster
I1008 10:44:03.531680    7235 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1008 10:44:03.531700    7235 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
I1008 10:44:03.531710    7235 cache.go:56] Caching tarball of preloaded images
I1008 10:44:03.531808    7235 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I1008 10:44:03.531812    7235 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
I1008 10:44:03.531891    7235 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/functional-099000/config.json ...
I1008 10:44:03.532289    7235 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1008 10:44:03.532337    7235 start.go:364] duration metric: took 43.708µs to acquireMachinesLock for "functional-099000"
I1008 10:44:03.532348    7235 start.go:96] Skipping create...Using existing machine configuration
I1008 10:44:03.532350    7235 fix.go:54] fixHost starting: 
I1008 10:44:03.532469    7235 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
W1008 10:44:03.532477    7235 fix.go:138] unexpected machine state, will restart: <nil>
I1008 10:44:03.540555    7235 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
I1008 10:44:03.544713    7235 qemu.go:418] Using hvf for hardware acceleration
I1008 10:44:03.544757    7235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
I1008 10:44:03.547101    7235 main.go:141] libmachine: STDOUT: 
I1008 10:44:03.547117    7235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1008 10:44:03.547145    7235 fix.go:56] duration metric: took 14.792125ms for fixHost
I1008 10:44:03.547149    7235 start.go:83] releasing machines lock for "functional-099000", held for 14.808833ms
W1008 10:44:03.547154    7235 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1008 10:44:03.547195    7235 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1008 10:44:03.547199    7235 start.go:729] Will try again in 5 seconds ...
I1008 10:44:08.549401    7235 start.go:360] acquireMachinesLock for functional-099000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I1008 10:44:08.549891    7235 start.go:364] duration metric: took 419.083µs to acquireMachinesLock for "functional-099000"
I1008 10:44:08.550028    7235 start.go:96] Skipping create...Using existing machine configuration
I1008 10:44:08.550045    7235 fix.go:54] fixHost starting: 
I1008 10:44:08.550868    7235 fix.go:112] recreateIfNeeded on functional-099000: state=Stopped err=<nil>
W1008 10:44:08.550890    7235 fix.go:138] unexpected machine state, will restart: <nil>
I1008 10:44:08.560269    7235 out.go:177] * Restarting existing qemu2 VM for "functional-099000" ...
I1008 10:44:08.565359    7235 qemu.go:418] Using hvf for hardware acceleration
I1008 10:44:08.565624    7235 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:a1:78:71:0e:ca -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/functional-099000/disk.qcow2
I1008 10:44:08.576803    7235 main.go:141] libmachine: STDOUT: 
I1008 10:44:08.576850    7235 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I1008 10:44:08.576946    7235 fix.go:56] duration metric: took 26.907167ms for fixHost
I1008 10:44:08.576962    7235 start.go:83] releasing machines lock for "functional-099000", held for 27.049041ms
W1008 10:44:08.577162    7235 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p functional-099000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I1008 10:44:08.584391    7235 out.go:201] 
W1008 10:44:08.588208    7235 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W1008 10:44:08.588226    7235 out.go:270] * 
W1008 10:44:08.590103    7235 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1008 10:44:08.600301    7235 out.go:201] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.07s)
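Every failure that follows shares the root cause captured in the start log above: minikube launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, the connection to /var/run/socket_vmnet is refused on both attempts, and so the VM never boots and no kubeconfig context is ever created for "functional-099000". A minimal triage on the test host, assuming socket_vmnet was installed via Homebrew as the minikube qemu2 driver docs describe, might look like:

	ls -l /var/run/socket_vmnet               # the unix socket the client dials; "Connection refused" suggests no daemon is listening on it
	sudo brew services restart socket_vmnet   # (re)start the daemon so the socket is served again

With the daemon healthy, "minikube start -p functional-099000" should get past fixHost instead of exiting with GUEST_PROVISION.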

TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-099000 apply -f testdata/invalidsvc.yaml
functional_test.go:2321: (dbg) Non-zero exit: kubectl --context functional-099000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.645625ms)

** stderr ** 
	error: context "functional-099000" does not exist

** /stderr **
functional_test.go:2323: kubectl --context functional-099000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)
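The kubectl failure mode here is configuration-level rather than API-level: because the start above never completed, no context named "functional-099000" was ever written to the kubeconfig. A quick confirmation, reusing the KUBECONFIG path from the start log, might be:

	KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig kubectl config get-contexts
	# for this run, "functional-099000" should be absent from the list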

TestFunctional/parallel/DashboardCmd (0.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-099000 --alsologtostderr -v=1]
functional_test.go:918: output didn't produce a URL
functional_test.go:910: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-099000 --alsologtostderr -v=1] ...
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-099000 --alsologtostderr -v=1] stdout:
functional_test.go:910: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-099000 --alsologtostderr -v=1] stderr:
I1008 10:44:48.296116    7551 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:48.296316    7551 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.296318    7551 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:48.296321    7551 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.296453    7551 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:48.296671    7551 mustload.go:65] Loading cluster: functional-099000
I1008 10:44:48.296902    7551 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.299903    7551 out.go:177] * The control-plane node functional-099000 host is not running: state=Stopped
I1008 10:44:48.303793    7551 out.go:177]   To start a cluster, run: "minikube start -p functional-099000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (45.828666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.21s)

TestFunctional/parallel/StatusCmd (0.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 status
functional_test.go:854: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 status: exit status 7 (33.903125ms)

-- stdout --
	functional-099000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:856: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-099000 status" : exit status 7
functional_test.go:860: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:860: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (33.731042ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:862: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-099000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:872: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 status -o json
functional_test.go:872: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 status -o json: exit status 7 (33.713042ms)

-- stdout --
	{"Name":"functional-099000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:874: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-099000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (33.838792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.14s)
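Two distinct failure signals appear above: kubectl commands against the missing context exit 1, while "minikube status" exits 7, which the harness itself annotates as "may be ok" because it reports a stopped host rather than a command error. The JSON form is the easiest of the three output modes to script against; a small sketch, assuming jq is available on the host:

	out/minikube-darwin-arm64 -p functional-099000 status -o json | jq -r .Host
	# prints "Stopped" for this run; a healthy cluster prints "Running"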

TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-099000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1627: (dbg) Non-zero exit: kubectl --context functional-099000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.638292ms)

** stderr ** 
	error: context "functional-099000" does not exist

** /stderr **
functional_test.go:1633: failed to create hello-node deployment with this command "kubectl --context functional-099000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1598: service test failed - dumping debug information
functional_test.go:1599: -----------------------service failure post-mortem--------------------------------
functional_test.go:1602: (dbg) Run:  kubectl --context functional-099000 describe po hello-node-connect
functional_test.go:1602: (dbg) Non-zero exit: kubectl --context functional-099000 describe po hello-node-connect: exit status 1 (26.544208ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000

** /stderr **
functional_test.go:1604: "kubectl --context functional-099000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1606: hello-node pod describe:
functional_test.go:1608: (dbg) Run:  kubectl --context functional-099000 logs -l app=hello-node-connect
functional_test.go:1608: (dbg) Non-zero exit: kubectl --context functional-099000 logs -l app=hello-node-connect: exit status 1 (26.148417ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000

** /stderr **
functional_test.go:1610: "kubectl --context functional-099000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1612: hello-node logs:
functional_test.go:1614: (dbg) Run:  kubectl --context functional-099000 describe svc hello-node-connect
functional_test.go:1614: (dbg) Non-zero exit: kubectl --context functional-099000 describe svc hello-node-connect: exit status 1 (26.320875ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000

** /stderr **
functional_test.go:1616: "kubectl --context functional-099000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1618: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (33.881208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-099000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (34.521292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.13s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "echo hello"
functional_test.go:1725: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "echo hello": exit status 83 (46.454208ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1730: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"echo hello\"" : exit status 83
functional_test.go:1734: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n"*. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"echo hello\""
functional_test.go:1742: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "cat /etc/hostname"
functional_test.go:1742: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "cat /etc/hostname": exit status 83 (45.757583ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1748: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1752: expected minikube ssh command output to be -"functional-099000"- but got *"* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n"*. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (34.11875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.13s)

TestFunctional/parallel/CpCmd (0.3s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (59.552875ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-099000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 "sudo cat /home/docker/cp-test.txt": exit status 83 (48.053167ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-099000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-099000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cp functional-099000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2579937284/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 cp functional-099000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2579937284/001/cp-test.txt: exit status 83 (47.420958ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-099000 cp functional-099000:/home/docker/cp-test.txt /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2579937284/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.988833ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelCpCmd2579937284/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (50.56725ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-099000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (44.1515ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-099000 ssh -n functional-099000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-099000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-099000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.30s)

TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/6907/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/test/nested/copy/6907/hosts"
functional_test.go:1931: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/test/nested/copy/6907/hosts": exit status 83 (45.405625ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1933: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/test/nested/copy/6907/hosts" failed: exit status 83
functional_test.go:1936: file sync test content: * The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:1946: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-099000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-099000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (34.311333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.34s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/6907.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/6907.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/6907.pem": exit status 83 (45.302375ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/6907.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo cat /etc/ssl/certs/6907.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/6907.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-099000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-099000"
  	"""
  )
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/6907.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /usr/share/ca-certificates/6907.pem"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /usr/share/ca-certificates/6907.pem": exit status 83 (43.572667ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/usr/share/ca-certificates/6907.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo cat /usr/share/ca-certificates/6907.pem\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/6907.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-099000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-099000"
  	"""
  )
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1973: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (52.604ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:1975: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1981: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-099000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-099000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/69072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/69072.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/69072.pem": exit status 83 (55.454875ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/69072.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo cat /etc/ssl/certs/69072.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/69072.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-099000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-099000"
  	"""
  )
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/69072.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /usr/share/ca-certificates/69072.pem"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /usr/share/ca-certificates/69072.pem": exit status 83 (55.896167ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/usr/share/ca-certificates/69072.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo cat /usr/share/ca-certificates/69072.pem\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/69072.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-099000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-099000"
  	"""
  )
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2000: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (45.683541ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test.go:2002: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2008: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-099000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-099000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (35.006417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.34s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-099000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:219: (dbg) Non-zero exit: kubectl --context functional-099000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (26.675584ms)
** stderr **
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
functional_test.go:221: failed to 'kubectl get nodes' with args "kubectl --context functional-099000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:227: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
functional_test.go:227: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-099000 -n functional-099000: exit status 7 (33.853625ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-099000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
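The go-template in the failing command only enumerates the label keys of the first node; the failure here is the missing kubeconfig context, not the template. A self-contained sketch of the same template logic against stand-in data (the label set below is hypothetical):

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Same template kubectl was given: print each label key of item 0.
	const tpl = `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`

	// Stand-in for the object a "get nodes" call would return.
	nodes := map[string]any{
		"items": []any{
			map[string]any{
				"metadata": map[string]any{
					"labels": map[string]any{
						"minikube.k8s.io/name":    "functional-099000",
						"minikube.k8s.io/primary": "true",
					},
				},
			},
		},
	}

	t := template.Must(template.New("labels").Parse(tpl))
	_ = t.Execute(os.Stdout, nodes) // minikube.k8s.io/name minikube.k8s.io/primary
}
```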
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo systemctl is-active crio": exit status 83 (42.527584ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:2030: output of 
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --: exit status 83
functional_test.go:2033: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
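What the assertion wants is the usual systemd contract: `systemctl is-active <unit>` prints the unit state and exits 0 only when the unit is active (an inactive unit conventionally prints "inactive" and exits 3). Since the host never came up, the test got minikube's advice text instead of either outcome. A sketch of reading that contract from Go, assuming a systemd host:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exit code 0 means active; non-zero plus the printed state covers
	// inactive/failed/unknown, so inspect both together.
	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil {
		fmt.Printf("crio not active (state=%q): %v\n", state, err)
		return
	}
	fmt.Println("crio state:", state)
}
```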
TestFunctional/parallel/Version/components (0.05s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 version -o=json --components
functional_test.go:2270: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 version -o=json --components: exit status 83 (45.990417ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:2272: error version: exit status 83
functional_test.go:2277: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:2277: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
--- FAIL: TestFunctional/parallel/Version/components (0.05s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-099000 image ls --format short --alsologtostderr:
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-099000 image ls --format short --alsologtostderr:
I1008 10:44:48.731315    7566 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:48.731492    7566 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.731496    7566 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:48.731498    7566 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.731630    7566 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:48.732046    7566 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.732108    7566 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-099000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-099000 image ls --format table --alsologtostderr:
I1008 10:44:48.977545    7578 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:48.977748    7578 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.977752    7578 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:48.977755    7578 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.977891    7578 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:48.978321    7578 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.978387    7578 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
I1008 10:45:00.151754    6907 retry.go:31] will retry after 18.334405912s: Temporary Error: Get "http:": http: no Host in request URL
I1008 10:45:18.488543    6907 retry.go:31] will retry after 46.973058988s: Temporary Error: Get "http:": http: no Host in request URL
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-099000 image ls --format json --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-099000 image ls --format json --alsologtostderr:
I1008 10:44:48.938687    7576 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:48.938880    7576 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.938883    7576 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:48.938886    7576 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.939057    7576 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:48.939495    7576 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.939556    7576 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-099000 image ls --format yaml --alsologtostderr:
[]
functional_test.go:269: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-099000 image ls --format yaml --alsologtostderr:
I1008 10:44:48.771177    7568 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:48.771356    7568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.771359    7568 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:48.771361    7568 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.771486    7568 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:48.771950    7568 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.772008    7568 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:275: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh pgrep buildkitd: exit status 83 (46.866167ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:315: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image build -t localhost/my-image:functional-099000 testdata/build --alsologtostderr
functional_test.go:323: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-099000 image build -t localhost/my-image:functional-099000 testdata/build --alsologtostderr:
I1008 10:44:48.858178    7572 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:48.858364    7572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.858367    7572 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:48.858370    7572 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:48.858491    7572 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:48.858944    7572 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.859415    7572 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:48.859640    7572 build_images.go:133] succeeded building to: 
I1008 10:44:48.859646    7572 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls
functional_test.go:446: expected "localhost/my-image:functional-099000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.13s)
TestFunctional/parallel/DockerEnv/bash (0.05s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-099000 docker-env) && out/minikube-darwin-arm64 status -p functional-099000"
functional_test.go:499: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-099000 docker-env) && out/minikube-darwin-arm64 status -p functional-099000": exit status 1 (50.593292ms)
functional_test.go:505: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)
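For context, `minikube docker-env` prints shell export lines (typically DOCKER_HOST, DOCKER_TLS_VERIFY, DOCKER_CERT_PATH and MINIKUBE_ACTIVE_DOCKERD) that the test eval's before re-running `minikube status`. A rough sketch of consuming that output without a shell; the `export KEY="value"` line format is an assumption about typical docker-env output, not taken from this log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-099000", "docker-env").Output()
	if err != nil {
		fmt.Println("docker-env failed:", err) // here: host stopped, so the eval had nothing to apply
		return
	}
	// Turn `export KEY="value"` lines into KEY=value pairs usable as a
	// child process environment.
	var env []string
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if strings.HasPrefix(line, "export ") {
			env = append(env, strings.ReplaceAll(strings.TrimPrefix(line, "export "), `"`, ""))
		}
	}
	fmt.Println(env)
}
```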
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2: exit status 83 (46.799417ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
** stderr ** 
	I1008 10:44:48.590194    7560 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:44:48.590387    7560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.590390    7560 out.go:358] Setting ErrFile to fd 2...
	I1008 10:44:48.590392    7560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.590532    7560 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:44:48.590774    7560 mustload.go:65] Loading cluster: functional-099000
	I1008 10:44:48.590968    7560 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:44:48.595555    7560 out.go:177] * The control-plane node functional-099000 host is not running: state=Stopped
	I1008 10:44:48.599549    7560 out.go:177]   To start a cluster, run: "minikube start -p functional-099000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2: exit status 83 (46.369042ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
** stderr ** 
	I1008 10:44:48.684737    7564 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:44:48.684910    7564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.684914    7564 out.go:358] Setting ErrFile to fd 2...
	I1008 10:44:48.684916    7564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.685041    7564 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:44:48.685259    7564 mustload.go:65] Loading cluster: functional-099000
	I1008 10:44:48.685480    7564 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:44:48.689614    7564 out.go:177] * The control-plane node functional-099000 host is not running: state=Stopped
	I1008 10:44:48.693567    7564 out.go:177]   To start a cluster, run: "minikube start -p functional-099000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.05s)
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2
functional_test.go:2119: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2: exit status 83 (46.540292ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
** stderr ** 
	I1008 10:44:48.637340    7562 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:44:48.637522    7562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.637525    7562 out.go:358] Setting ErrFile to fd 2...
	I1008 10:44:48.637527    7562 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.637675    7562 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:44:48.637917    7562 mustload.go:65] Loading cluster: functional-099000
	I1008 10:44:48.638138    7562 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:44:48.642585    7562 out.go:177] * The control-plane node functional-099000 host is not running: state=Stopped
	I1008 10:44:48.646549    7562 out.go:177]   To start a cluster, run: "minikube start -p functional-099000"
** /stderr **
functional_test.go:2121: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-099000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2126: update-context: got="* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.05s)
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image load --daemon kicbase/echo-server:functional-099000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-099000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.30s)
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-099000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1437: (dbg) Non-zero exit: kubectl --context functional-099000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (29.285584ms)
** stderr **
	error: context "functional-099000" does not exist
** /stderr **
functional_test.go:1443: failed to create hello-node deployment with this command "kubectl --context functional-099000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)
TestFunctional/parallel/ServiceCmd/List (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 service list
functional_test.go:1459: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 service list: exit status 83 (47.89125ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:1461: failed to do service list. args "out/minikube-darwin-arm64 -p functional-099000 service list" : exit status 83
functional_test.go:1464: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image load --daemon kicbase/echo-server:functional-099000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-099000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.30s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.06s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 service list -o json
functional_test.go:1489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 service list -o json: exit status 83 (57.645458ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:1491: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-099000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.06s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 service --namespace=default --https --url hello-node
functional_test.go:1509: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 service --namespace=default --https --url hello-node: exit status 83 (46.670375ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:1511: failed to get service url. args "out/minikube-darwin-arm64 -p functional-099000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.05s)
TestFunctional/parallel/ServiceCmd/Format (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 service hello-node --url --format={{.IP}}
functional_test.go:1540: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 service hello-node --url --format={{.IP}}: exit status 83 (46.854709ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:1542: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-099000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1548: "* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)
TestFunctional/parallel/ServiceCmd/URL (0.05s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 service hello-node --url
functional_test.go:1559: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 service hello-node --url: exit status 83 (49.573458ms)
-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"
-- /stdout --
functional_test.go:1561: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-099000 service hello-node --url": exit status 83
functional_test.go:1565: found endpoint for hello-node: * The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test.go:1569: failed to parse "* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"": parse "* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.05s)
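The closing parse error is stock net/url behavior: minikube printed its two-line advice text where a URL was expected, and the embedded newline is a control character the parser rejects. A minimal reproduction:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// The newline in minikube's advice text is what trips the parser.
	notAURL := "* The control-plane node functional-099000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-099000\""
	_, err := url.Parse(notAURL)
	fmt.Println(err) // net/url: invalid control character in URL
}
```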
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-099000
functional_test.go:245: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image load --daemon kicbase/echo-server:functional-099000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-099000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.21s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I1008 10:44:11.759496    7365 out.go:345] Setting OutFile to fd 1 ...
I1008 10:44:11.759714    7365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:11.759716    7365 out.go:358] Setting ErrFile to fd 2...
I1008 10:44:11.759719    7365 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:44:11.759848    7365 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:44:11.760079    7365 mustload.go:65] Loading cluster: functional-099000
I1008 10:44:11.760310    7365 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:44:11.765493    7365 out.go:177] * The control-plane node functional-099000 host is not running: state=Stopped
I1008 10:44:11.777434    7365 out.go:177]   To start a cluster, run: "minikube start -p functional-099000"
stdout: * The control-plane node functional-099000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-099000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 7364: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.09s)
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-099000": client config: context "functional-099000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.71s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1008 10:44:11.844217    6907 retry.go:31] will retry after 4.067509918s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-099000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-099000 get svc nginx-svc: exit status 1 (69.418958ms)
** stderr **
	Error in configuration: context was not found for specified context: functional-099000
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-099000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.71s)
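The retried `Get "http:": http: no Host in request URL` is what net/http reports when a request is built from a URL with no host component, which is what the test ends up with when the tunnel never published an address. Reproducible in isolation:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// A bare scheme parses fine but cannot be sent, producing the
	// exact error the tunnel test kept retrying on.
	_, err := http.Get("http:")
	fmt.Println(err) // Get "http:": http: no Host in request URL
}
```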
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image save kicbase/echo-server:functional-099000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/Users/jenkins/workspace/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-099000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
I1008 10:46:05.553460    6907 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.029599084s)
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1
DNS configuration (for scoped queries)
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 12 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.07s)
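The dig probe queries the cluster DNS service address (10.96.0.10) directly from the macOS host, which only answers while minikube tunnel is routing the service CIDR; the scutil dump above shows the cluster.local resolver entry was installed, so the timeout points at routing rather than resolver configuration. The same probe can be written with a net.Resolver pinned to that server (server address and service name taken from the test, not from a live cluster):

```go
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send every lookup to the in-cluster DNS service, the way
	// "dig @10.96.0.10" does, bypassing the host's resolvers.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	addrs, err := r.LookupHost(ctx, "nginx-svc.default.svc.cluster.local.")
	fmt.Println(addrs, err) // times out unless the tunnel routes 10.96.0.0/12
}
```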
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
I1008 10:46:30.693595    6907 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:46:40.694808    6907 retry.go:31] will retry after 3.696368716s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I1008 10:46:54.394694    6907 retry.go:31] will retry after 6.236783135s: Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": dial tcp: lookup nginx-svc.default.svc.cluster.local. on 8.8.8.8:53: read udp 207.254.73.72:54762->10.96.0.10:53: i/o timeout
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (30.05s)
TestMultiControlPlane/serial/StartCluster (9.84s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-500000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-500000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.766328166s)
-- stdout --
	* [ha-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-500000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
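Both VM creation attempts fail the same way: QEMU cannot connect to the socket_vmnet daemon's unix socket, so the run dies on the host network layer before any Kubernetes work begins. A quick host-side probe of that socket (path taken from the log above; a refused or missing socket reproduces the error):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// If the socket_vmnet daemon is up, the dial succeeds;
	// "connection refused" here matches the OUTPUT lines in the log.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}
```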
** stderr ** 
	I1008 10:47:01.075882    7611 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:47:01.076031    7611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:47:01.076035    7611 out.go:358] Setting ErrFile to fd 2...
	I1008 10:47:01.076037    7611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:47:01.076164    7611 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:47:01.077330    7611 out.go:352] Setting JSON to false
	I1008 10:47:01.096040    7611 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4591,"bootTime":1728405030,"procs":560,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:47:01.096178    7611 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:47:01.101511    7611 out.go:177] * [ha-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:47:01.109379    7611 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:47:01.109416    7611 notify.go:220] Checking for updates...
	I1008 10:47:01.116520    7611 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:47:01.117951    7611 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:47:01.121415    7611 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:47:01.124500    7611 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:47:01.127494    7611 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:47:01.130680    7611 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:47:01.134419    7611 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 10:47:01.141439    7611 start.go:297] selected driver: qemu2
	I1008 10:47:01.141446    7611 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:47:01.141451    7611 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:47:01.143910    7611 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:47:01.147469    7611 out.go:177] * Automatically selected the socket_vmnet network
	I1008 10:47:01.150585    7611 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:47:01.150614    7611 cni.go:84] Creating CNI manager for ""
	I1008 10:47:01.150639    7611 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 10:47:01.150647    7611 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 10:47:01.150687    7611 start.go:340] cluster config:
	{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:47:01.155363    7611 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:47:01.163478    7611 out.go:177] * Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	I1008 10:47:01.167473    7611 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:47:01.167496    7611 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:47:01.167516    7611 cache.go:56] Caching tarball of preloaded images
	I1008 10:47:01.167621    7611 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:47:01.167632    7611 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:47:01.167863    7611 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/ha-500000/config.json ...
	I1008 10:47:01.167876    7611 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/ha-500000/config.json: {Name:mkabff54825e43171c62c9a74de60dbc183d7b42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:47:01.168135    7611 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:47:01.168191    7611 start.go:364] duration metric: took 49.958µs to acquireMachinesLock for "ha-500000"
	I1008 10:47:01.168204    7611 start.go:93] Provisioning new machine with config: &{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:47:01.168231    7611 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:47:01.171469    7611 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:47:01.188946    7611 start.go:159] libmachine.API.Create for "ha-500000" (driver="qemu2")
	I1008 10:47:01.188973    7611 client.go:168] LocalClient.Create starting
	I1008 10:47:01.189041    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:47:01.189080    7611 main.go:141] libmachine: Decoding PEM data...
	I1008 10:47:01.189093    7611 main.go:141] libmachine: Parsing certificate...
	I1008 10:47:01.189139    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:47:01.189169    7611 main.go:141] libmachine: Decoding PEM data...
	I1008 10:47:01.189178    7611 main.go:141] libmachine: Parsing certificate...
	I1008 10:47:01.189570    7611 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:47:01.335863    7611 main.go:141] libmachine: Creating SSH key...
	I1008 10:47:01.368306    7611 main.go:141] libmachine: Creating Disk image...
	I1008 10:47:01.368312    7611 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:47:01.368517    7611 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:47:01.378255    7611 main.go:141] libmachine: STDOUT: 
	I1008 10:47:01.378272    7611 main.go:141] libmachine: STDERR: 
	I1008 10:47:01.378332    7611 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2 +20000M
	I1008 10:47:01.386690    7611 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:47:01.386711    7611 main.go:141] libmachine: STDERR: 
	I1008 10:47:01.386730    7611 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:47:01.386735    7611 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:47:01.386745    7611 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:47:01.386782    7611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:ef:f0:24:9e:37 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:47:01.388581    7611 main.go:141] libmachine: STDOUT: 
	I1008 10:47:01.388592    7611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:47:01.388613    7611 client.go:171] duration metric: took 199.635167ms to LocalClient.Create
	I1008 10:47:03.390229    7611 start.go:128] duration metric: took 2.221980041s to createHost
	I1008 10:47:03.390354    7611 start.go:83] releasing machines lock for "ha-500000", held for 2.222155667s
	W1008 10:47:03.390419    7611 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:47:03.403525    7611 out.go:177] * Deleting "ha-500000" in qemu2 ...
	W1008 10:47:03.431107    7611 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:47:03.431139    7611 start.go:729] Will try again in 5 seconds ...
	I1008 10:47:08.433372    7611 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:47:08.433852    7611 start.go:364] duration metric: took 404.458µs to acquireMachinesLock for "ha-500000"
	I1008 10:47:08.433961    7611 start.go:93] Provisioning new machine with config: &{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:47:08.434276    7611 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:47:08.442965    7611 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:47:08.493467    7611 start.go:159] libmachine.API.Create for "ha-500000" (driver="qemu2")
	I1008 10:47:08.493517    7611 client.go:168] LocalClient.Create starting
	I1008 10:47:08.493658    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:47:08.493748    7611 main.go:141] libmachine: Decoding PEM data...
	I1008 10:47:08.493767    7611 main.go:141] libmachine: Parsing certificate...
	I1008 10:47:08.493839    7611 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:47:08.493901    7611 main.go:141] libmachine: Decoding PEM data...
	I1008 10:47:08.493915    7611 main.go:141] libmachine: Parsing certificate...
	I1008 10:47:08.494480    7611 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:47:08.654459    7611 main.go:141] libmachine: Creating SSH key...
	I1008 10:47:08.743789    7611 main.go:141] libmachine: Creating Disk image...
	I1008 10:47:08.743795    7611 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:47:08.744002    7611 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:47:08.754030    7611 main.go:141] libmachine: STDOUT: 
	I1008 10:47:08.754050    7611 main.go:141] libmachine: STDERR: 
	I1008 10:47:08.754111    7611 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2 +20000M
	I1008 10:47:08.762665    7611 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:47:08.762679    7611 main.go:141] libmachine: STDERR: 
	I1008 10:47:08.762689    7611 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:47:08.762693    7611 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:47:08.762699    7611 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:47:08.762738    7611 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:a9:51:27:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:47:08.764473    7611 main.go:141] libmachine: STDOUT: 
	I1008 10:47:08.764485    7611 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:47:08.764499    7611 client.go:171] duration metric: took 270.9765ms to LocalClient.Create
	I1008 10:47:10.766671    7611 start.go:128] duration metric: took 2.332367667s to createHost
	I1008 10:47:10.766733    7611 start.go:83] releasing machines lock for "ha-500000", held for 2.332862833s
	W1008 10:47:10.767134    7611 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:47:10.779857    7611 out.go:201] 
	W1008 10:47:10.782951    7611 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:47:10.782980    7611 out.go:270] * 
	* 
	W1008 10:47:10.785950    7611 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:47:10.795830    7611 out.go:201] 

** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-500000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (73.06375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.84s)
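
Note on the failure above: "Connection refused" on a unix socket means nothing was listening at /var/run/socket_vmnet, so the socket_vmnet daemon itself was down on this agent; both VM creation attempts, and the retry five seconds later, fail identically before QEMU ever boots. A minimal Go sketch of the same connectivity probe, using only the socket path shown in the log (illustrative, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the unix socket exactly as a socket_vmnet client would.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			// With no daemon listening, this reproduces the failure mode in the
			// log above: "connect: connection refused".
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}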

TestMultiControlPlane/serial/DeployApp (102.71s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (64.771875ms)

** stderr ** 
	error: cluster "ha-500000" does not exist

** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- rollout status deployment/busybox: exit status 1 (61.528667ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.24175ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:11.073211    6907 retry.go:31] will retry after 939.647639ms: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.020875ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:12.123310    6907 retry.go:31] will retry after 1.835013813s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.463542ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:14.069149    6907 retry.go:31] will retry after 2.365900389s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.687708ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:16.545065    6907 retry.go:31] will retry after 3.493348804s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.69175ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:20.149600    6907 retry.go:31] will retry after 6.631895984s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.464292ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:26.892480    6907 retry.go:31] will retry after 6.898472586s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.2665ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:33.901681    6907 retry.go:31] will retry after 6.237777297s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (110.456542ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:47:40.250904    6907 retry.go:31] will retry after 23.402971924s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.481958ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:48:03.762785    6907 retry.go:31] will retry after 15.980400045s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.741416ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:48:19.855349    6907 retry.go:31] will retry after 33.345796561s: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (112.379042ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.057458ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec  -- nslookup kubernetes.io: exit status 1 (62.082791ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.069375ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.296583ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.893917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (102.71s)
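
The 102.71s spent in this test is almost entirely backoff: every kubectl call fails immediately because the cluster was never provisioned, and the harness retries on roughly growing, jittered intervals (939ms at first, up to 33s). The actual retry.go is not shown in this report, so the following Go sketch only illustrates the capped, jittered-doubling shape those intervals suggest, not minikube's exact schedule:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retry runs op until it succeeds or attempts run out, sleeping a
	// jittered, doubling, capped interval between tries.
	func retry(attempts int, op func() error) error {
		backoff := time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			// Randomize around the current backoff, as the log intervals suggest.
			d := backoff/2 + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
			if backoff < 30*time.Second {
				backoff *= 2
			}
		}
		return err
	}

	func main() {
		_ = retry(3, func() error { return errors.New(`no server found for cluster "ha-500000"`) })
	}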

TestMultiControlPlane/serial/PingHostFromPods (0.1s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-500000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.375833ms)

** stderr ** 
	error: no server found for cluster "ha-500000"

** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.712334ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.10s)

TestMultiControlPlane/serial/AddWorkerNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-500000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-500000 -v=7 --alsologtostderr: exit status 83 (50.029917ms)

-- stdout --
	* The control-plane node ha-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-500000"

-- /stdout --
** stderr ** 
	I1008 10:48:53.726738    7700 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:53.726956    7700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:53.726959    7700 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:53.726962    7700 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:53.727090    7700 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:53.727346    7700 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:53.727557    7700 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:53.733181    7700 out.go:177] * The control-plane node ha-500000 host is not running: state=Stopped
	I1008 10:48:53.738227    7700 out.go:177]   To start a cluster, run: "minikube start -p ha-500000"

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-500000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.637792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-500000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-500000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.069833ms)

** stderr ** 
	Error in configuration: context was not found for specified context: ha-500000

** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-500000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-500000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.519458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)
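
Two errors stack in this test: kubectl exits 1 because no ha-500000 context exists, and the harness then reports "unexpected end of JSON input" because it attempts to decode kubectl's empty stdout as JSON. A self-contained Go sketch of that second-order failure (not the test's actual decode path):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// kubectl failed before printing anything, so the decoder sees "".
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}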

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-500000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-500000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.478833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.09s)

TestMultiControlPlane/serial/CopyFile (0.07s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status --output json -v=7 --alsologtostderr: exit status 7 (34.136459ms)

-- stdout --
	{"Name":"ha-500000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I1008 10:48:53.960483    7712 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:53.960687    7712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:53.960690    7712 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:53.960693    7712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:53.960821    7712 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:53.960955    7712 out.go:352] Setting JSON to true
	I1008 10:48:53.960966    7712 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:53.961035    7712 notify.go:220] Checking for updates...
	I1008 10:48:53.961179    7712 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:53.961186    7712 status.go:174] checking status of ha-500000 ...
	I1008 10:48:53.961442    7712 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:48:53.961445    7712 status.go:384] host is not running, skipping remaining checks
	I1008 10:48:53.961447    7712 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:335: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-500000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (35.071084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
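
This failure is a shape mismatch rather than another missing-cluster error: with a single stopped node, "minikube status --output json" prints one JSON object, while the test decodes into a []cluster.Status slice. A minimal Go sketch of the mismatch, using a hypothetical Status struct standing in for minikube's cluster.Status (so the printed type name differs):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status is a stand-in for a subset of minikube's cluster.Status fields.
	type Status struct {
		Name string
		Host string
	}

	func main() {
		raw := []byte(`{"Name":"ha-500000","Host":"Stopped"}`)
		var many []Status
		// Unmarshaling a lone object into a slice fails, matching the log:
		// "json: cannot unmarshal object into Go value of type []cluster.Status".
		if err := json.Unmarshal(raw, &many); err != nil {
			fmt.Println(err)
		}
	}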

TestMultiControlPlane/serial/StopSecondaryNode (0.13s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 node stop m02 -v=7 --alsologtostderr: exit status 85 (56.310209ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 10:48:54.030930    7716 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:54.031137    7716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.031140    7716 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:54.031143    7716 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.031263    7716 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:54.031539    7716 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:54.031771    7716 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:54.036618    7716 out.go:201] 
	W1008 10:48:54.043736    7716 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1008 10:48:54.043741    7716 out.go:270] * 
	* 
	W1008 10:48:54.045735    7716 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:48:54.049528    7716 out.go:201] 

** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-500000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:371: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (34.929666ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:48:54.087732    7718 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:54.087926    7718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.087929    7718 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:54.087931    7718 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.088057    7718 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:54.088174    7718 out.go:352] Setting JSON to false
	I1008 10:48:54.088184    7718 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:54.088228    7718 notify.go:220] Checking for updates...
	I1008 10:48:54.088411    7718 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:54.088418    7718 status.go:174] checking status of ha-500000 ...
	I1008 10:48:54.088666    7718 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:48:54.088669    7718 status.go:384] host is not running, skipping remaining checks
	I1008 10:48:54.088672    7718 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:380: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:383: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:386: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.737833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.13s)
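
The stop fails before any VM interaction: the profile only ever received its primary node (the earlier StartCluster failure left no m02 to manage), so the name "m02" cannot be resolved. This can be confirmed by hand with commands that appear elsewhere in this report:

	# List the nodes the "ha-500000" profile actually has; only the primary
	# control-plane node is present, so "m02" cannot be found.
	out/minikube-darwin-arm64 node list -p ha-500000
	out/minikube-darwin-arm64 -p ha-500000 node stop m02   # exits 85 (GUEST_NODE_RETRIEVE)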

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-500000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.257834ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.09s)
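
The assertion reads the Status field of the profile entry in the 'profile list' JSON quoted above. A minimal sketch of the same lookup, assuming jq is available (the harness itself decodes the JSON in Go):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-500000") | .Status'
	# prints "Starting" here, where the test expects "Degraded"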

TestMultiControlPlane/serial/RestartSecondaryNode (57.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 node start m02 -v=7 --alsologtostderr: exit status 85 (50.335083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1008 10:48:54.244614    7727 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:54.244829    7727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.244832    7727 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:54.244834    7727 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.244977    7727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:54.245232    7727 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:54.245443    7727 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:54.248654    7727 out.go:201] 
	W1008 10:48:54.252561    7727 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W1008 10:48:54.252566    7727 out.go:270] * 
	* 
	W1008 10:48:54.254622    7727 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:48:54.258488    7727 out.go:201] 

** /stderr **
ha_test.go:424: I1008 10:48:54.244614    7727 out.go:345] Setting OutFile to fd 1 ...
I1008 10:48:54.244829    7727 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:48:54.244832    7727 out.go:358] Setting ErrFile to fd 2...
I1008 10:48:54.244834    7727 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:48:54.244977    7727 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:48:54.245232    7727 mustload.go:65] Loading cluster: ha-500000
I1008 10:48:54.245443    7727 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:48:54.248654    7727 out.go:201] 
W1008 10:48:54.252561    7727 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W1008 10:48:54.252566    7727 out.go:270] * 
* 
W1008 10:48:54.254622    7727 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1008 10:48:54.258488    7727 out.go:201] 

ha_test.go:425: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-500000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (34.516208ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:48:54.295203    7729 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:54.295395    7729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.295398    7729 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:54.295400    7729 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.295524    7729 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:54.295660    7729 out.go:352] Setting JSON to false
	I1008 10:48:54.295669    7729 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:54.295721    7729 notify.go:220] Checking for updates...
	I1008 10:48:54.295883    7729 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:54.295891    7729 status.go:174] checking status of ha-500000 ...
	I1008 10:48:54.296147    7729 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:48:54.296150    7729 status.go:384] host is not running, skipping remaining checks
	I1008 10:48:54.296152    7729 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:48:54.297090    6907 retry.go:31] will retry after 571.326639ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (79.108583ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:48:54.947725    7731 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:54.947947    7731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.947952    7731 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:54.947955    7731 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:54.948138    7731 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:54.948287    7731 out.go:352] Setting JSON to false
	I1008 10:48:54.948298    7731 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:54.948328    7731 notify.go:220] Checking for updates...
	I1008 10:48:54.948550    7731 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:54.948557    7731 status.go:174] checking status of ha-500000 ...
	I1008 10:48:54.948863    7731 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:48:54.948867    7731 status.go:384] host is not running, skipping remaining checks
	I1008 10:48:54.948873    7731 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:48:54.949912    6907 retry.go:31] will retry after 999.107273ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (78.303458ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:48:56.027605    7736 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:56.027863    7736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:56.027867    7736 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:56.027870    7736 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:56.028053    7736 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:56.028202    7736 out.go:352] Setting JSON to false
	I1008 10:48:56.028215    7736 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:56.028267    7736 notify.go:220] Checking for updates...
	I1008 10:48:56.028486    7736 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:56.028494    7736 status.go:174] checking status of ha-500000 ...
	I1008 10:48:56.028806    7736 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:48:56.028811    7736 status.go:384] host is not running, skipping remaining checks
	I1008 10:48:56.028814    7736 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:48:56.029819    6907 retry.go:31] will retry after 3.155293583s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (80.4305ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:48:59.265741    7738 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:48:59.265989    7738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:59.265993    7738 out.go:358] Setting ErrFile to fd 2...
	I1008 10:48:59.265996    7738 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:48:59.266203    7738 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:48:59.266350    7738 out.go:352] Setting JSON to false
	I1008 10:48:59.266363    7738 mustload.go:65] Loading cluster: ha-500000
	I1008 10:48:59.266407    7738 notify.go:220] Checking for updates...
	I1008 10:48:59.266631    7738 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:48:59.266639    7738 status.go:174] checking status of ha-500000 ...
	I1008 10:48:59.266958    7738 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:48:59.266963    7738 status.go:384] host is not running, skipping remaining checks
	I1008 10:48:59.266965    7738 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:48:59.267963    6907 retry.go:31] will retry after 2.379026361s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (79.849834ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:49:01.727163    7741 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:01.727399    7741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:01.727408    7741 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:01.727411    7741 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:01.727569    7741 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:01.727755    7741 out.go:352] Setting JSON to false
	I1008 10:49:01.727768    7741 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:01.727804    7741 notify.go:220] Checking for updates...
	I1008 10:49:01.728043    7741 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:01.728050    7741 status.go:174] checking status of ha-500000 ...
	I1008 10:49:01.728380    7741 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:49:01.728385    7741 status.go:384] host is not running, skipping remaining checks
	I1008 10:49:01.728391    7741 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:49:01.729418    6907 retry.go:31] will retry after 5.338269262s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (80.740917ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:49:07.148660    7743 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:07.148886    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:07.148890    7743 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:07.148893    7743 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:07.149060    7743 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:07.149217    7743 out.go:352] Setting JSON to false
	I1008 10:49:07.149230    7743 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:07.149278    7743 notify.go:220] Checking for updates...
	I1008 10:49:07.149489    7743 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:07.149497    7743 status.go:174] checking status of ha-500000 ...
	I1008 10:49:07.149796    7743 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:49:07.149801    7743 status.go:384] host is not running, skipping remaining checks
	I1008 10:49:07.149804    7743 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:49:07.150809    6907 retry.go:31] will retry after 6.292581851s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (79.493ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:49:13.523082    7745 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:13.523305    7745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:13.523309    7745 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:13.523312    7745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:13.523476    7745 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:13.523643    7745 out.go:352] Setting JSON to false
	I1008 10:49:13.523655    7745 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:13.523696    7745 notify.go:220] Checking for updates...
	I1008 10:49:13.523904    7745 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:13.523911    7745 status.go:174] checking status of ha-500000 ...
	I1008 10:49:13.524214    7745 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:49:13.524219    7745 status.go:384] host is not running, skipping remaining checks
	I1008 10:49:13.524221    7745 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:49:13.525266    6907 retry.go:31] will retry after 13.795489683s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (79.28575ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:49:27.401300    7747 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:27.401524    7747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:27.401528    7747 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:27.401531    7747 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:27.401701    7747 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:27.401879    7747 out.go:352] Setting JSON to false
	I1008 10:49:27.401892    7747 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:27.401929    7747 notify.go:220] Checking for updates...
	I1008 10:49:27.402169    7747 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:27.402177    7747 status.go:174] checking status of ha-500000 ...
	I1008 10:49:27.402505    7747 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:49:27.402510    7747 status.go:384] host is not running, skipping remaining checks
	I1008 10:49:27.402512    7747 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
I1008 10:49:27.403531    6907 retry.go:31] will retry after 24.142894193s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (79.21675ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:49:51.625782    7752 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:51.626016    7752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:51.626020    7752 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:51.626023    7752 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:51.626184    7752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:51.626357    7752 out.go:352] Setting JSON to false
	I1008 10:49:51.626374    7752 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:51.626418    7752 notify.go:220] Checking for updates...
	I1008 10:49:51.626642    7752 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:51.626650    7752 status.go:174] checking status of ha-500000 ...
	I1008 10:49:51.626996    7752 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:49:51.627000    7752 status.go:384] host is not running, skipping remaining checks
	I1008 10:49:51.627002    7752 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (36.516875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (57.45s)
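
The repeated status invocations above come from the harness's retry loop (retry.go:31), which re-runs the status command with growing, jittered delays (0.57s up to ~24s here) until the cluster reports healthy or the budget is spent. A rough shell equivalent, with fixed illustrative delays rather than the harness's jittered schedule:

	for delay in 1 2 4 8 16 32; do
	  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr && break
	  sleep "$delay"   # every attempt here exits 7 because the host never leaves "Stopped"
	done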

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-500000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-500000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (36.141ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.09s)
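
Both assertions are driven by the same 'profile list' JSON: the node count comes from Config.Nodes and the health summary from Status. A sketch of both lookups, again assuming jq rather than the harness's Go decoding:

	out/minikube-darwin-arm64 profile list --output json \
	  | jq '.valid[] | select(.Name == "ha-500000") | {status: .Status, nodes: (.Config.Nodes | length)}'
	# reports status "Starting" with 1 node, where the test expects "HAppy" with 4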

TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.32s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-500000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-500000 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-500000 -v=7 --alsologtostderr: (1.93876575s)
ha_test.go:469: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr
ha_test.go:469: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.231006875s)

-- stdout --
	* [ha-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:49:53.797845    7773 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:53.798064    7773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:53.798068    7773 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:53.798071    7773 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:53.798248    7773 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:53.799507    7773 out.go:352] Setting JSON to false
	I1008 10:49:53.819266    7773 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4763,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:49:53.819339    7773 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:49:53.823733    7773 out.go:177] * [ha-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:49:53.831447    7773 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:49:53.831515    7773 notify.go:220] Checking for updates...
	I1008 10:49:53.838486    7773 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:49:53.841491    7773 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:49:53.844482    7773 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:49:53.847469    7773 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:49:53.850502    7773 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:49:53.852203    7773 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:53.852266    7773 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:49:53.856445    7773 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:49:53.863297    7773 start.go:297] selected driver: qemu2
	I1008 10:49:53.863303    7773 start.go:901] validating driver "qemu2" against &{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:49:53.863365    7773 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:49:53.865840    7773 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:49:53.865868    7773 cni.go:84] Creating CNI manager for ""
	I1008 10:49:53.865892    7773 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 10:49:53.865937    7773 start.go:340] cluster config:
	{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:49:53.870386    7773 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:49:53.878469    7773 out.go:177] * Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	I1008 10:49:53.882469    7773 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:49:53.882485    7773 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:49:53.882496    7773 cache.go:56] Caching tarball of preloaded images
	I1008 10:49:53.882574    7773 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:49:53.882579    7773 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:49:53.882647    7773 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/ha-500000/config.json ...
	I1008 10:49:53.883101    7773 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:49:53.883150    7773 start.go:364] duration metric: took 42.5µs to acquireMachinesLock for "ha-500000"
	I1008 10:49:53.883158    7773 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:49:53.883161    7773 fix.go:54] fixHost starting: 
	I1008 10:49:53.883286    7773 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W1008 10:49:53.883296    7773 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:49:53.891445    7773 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I1008 10:49:53.895523    7773 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:49:53.895571    7773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:a9:51:27:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:49:53.897828    7773 main.go:141] libmachine: STDOUT: 
	I1008 10:49:53.897847    7773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:49:53.897876    7773 fix.go:56] duration metric: took 14.711959ms for fixHost
	I1008 10:49:53.897882    7773 start.go:83] releasing machines lock for "ha-500000", held for 14.727792ms
	W1008 10:49:53.897890    7773 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:49:53.897933    7773 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:49:53.897938    7773 start.go:729] Will try again in 5 seconds ...
	I1008 10:49:58.900077    7773 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:49:58.900470    7773 start.go:364] duration metric: took 297.084µs to acquireMachinesLock for "ha-500000"
	I1008 10:49:58.900590    7773 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:49:58.900612    7773 fix.go:54] fixHost starting: 
	I1008 10:49:58.901283    7773 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W1008 10:49:58.901309    7773 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:49:58.905702    7773 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I1008 10:49:58.913616    7773 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:49:58.913853    7773 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:a9:51:27:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:49:58.923736    7773 main.go:141] libmachine: STDOUT: 
	I1008 10:49:58.923793    7773 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:49:58.923864    7773 fix.go:56] duration metric: took 23.253125ms for fixHost
	I1008 10:49:58.923888    7773 start.go:83] releasing machines lock for "ha-500000", held for 23.392375ms
	W1008 10:49:58.924107    7773 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:49:58.932581    7773 out.go:201] 
	W1008 10:49:58.936721    7773 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:49:58.936744    7773 out.go:270] * 
	* 
	W1008 10:49:58.939244    7773 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:49:58.947484    7773 out.go:201] 

** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-500000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-500000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (36.352292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (7.32s)
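
Every failed restart in this section dies at the same call: socket_vmnet_client cannot reach the unix socket /var/run/socket_vmnet, so QEMU never receives its network file descriptor. A minimal Go sketch (illustrative, not minikube's own code) that reproduces the same connectivity check:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// socket_vmnet_client connects here and passes the resulting fd to
		// qemu-system-aarch64 as "-netdev socket,id=net0,fd=3". With no
		// daemon listening, the dial fails exactly as in the driver logs.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("socket_vmnet is up")
	}

Run on a host where the socket_vmnet daemon is down, this prints the same "connection refused" seen in the STDERR lines above.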

TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 node delete m03 -v=7 --alsologtostderr: exit status 83 (44.104459ms)

-- stdout --
	* The control-plane node ha-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-500000"

-- /stdout --
** stderr ** 
	I1008 10:49:59.105320    7785 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:59.105523    7785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:59.105526    7785 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:59.105529    7785 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:59.105642    7785 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:59.105890    7785 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:59.106099    7785 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:59.110213    7785 out.go:177] * The control-plane node ha-500000 host is not running: state=Stopped
	I1008 10:49:59.113249    7785 out.go:177]   To start a cluster, run: "minikube start -p ha-500000"

** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-500000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:495: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (34.187542ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:49:59.149521    7787 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:49:59.149705    7787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:59.149708    7787 out.go:358] Setting ErrFile to fd 2...
	I1008 10:49:59.149711    7787 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:49:59.149852    7787 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:49:59.149978    7787 out.go:352] Setting JSON to false
	I1008 10:49:59.149990    7787 mustload.go:65] Loading cluster: ha-500000
	I1008 10:49:59.150058    7787 notify.go:220] Checking for updates...
	I1008 10:49:59.150193    7787 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:49:59.150200    7787 status.go:174] checking status of ha-500000 ...
	I1008 10:49:59.150445    7787 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:49:59.150449    7787 status.go:384] host is not running, skipping remaining checks
	I1008 10:49:59.150454    7787 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (35.149542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)
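
The status.go:176 line above prints a pointer to minikube's status struct with fmt's %+v verb, which is where the &{Name:ha-500000 Host:Stopped ...} form comes from. A sketch with a stand-in struct (field names copied from that log line; the type itself is illustrative, not minikube's actual definition):

	package main

	import "fmt"

	// status mirrors the fields visible in the log line above.
	type status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
		TimeToStop string
		DockerEnv  string
		PodManEnv  string
	}

	func main() {
		s := &status{Name: "ha-500000", Host: "Stopped", Kubelet: "Stopped",
			APIServer: "Stopped", Kubeconfig: "Stopped"}
		// %+v on a struct pointer yields the &{Name:... Host:...} form.
		fmt.Printf("%+v\n", s)
	}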

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-500000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.40975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.08s)
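
The assertion reads the Status field of the ha-500000 entry in the quoted `profile list --output json` payload. A hedged Go sketch of extracting that status (field names taken from the quoted JSON; the struct is illustrative, not minikube's own type):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// profileList keeps only the fields the assertion cares about.
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Trimmed from the payload quoted above.
		raw := []byte(`{"invalid":[],"valid":[{"Name":"ha-500000","Status":"Starting"}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // want "Degraded", got "Starting"
		}
	}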

TestMultiControlPlane/serial/StopCluster (3.28s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-darwin-arm64 -p ha-500000 stop -v=7 --alsologtostderr: (3.16485625s)
ha_test.go:539: (dbg) Run:  out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr: exit status 7 (75.971041ms)

-- stdout --
	ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:50:02.510795    7816 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:50:02.511022    7816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:02.511026    7816 out.go:358] Setting ErrFile to fd 2...
	I1008 10:50:02.511029    7816 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:02.511178    7816 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:50:02.511323    7816 out.go:352] Setting JSON to false
	I1008 10:50:02.511334    7816 mustload.go:65] Loading cluster: ha-500000
	I1008 10:50:02.511374    7816 notify.go:220] Checking for updates...
	I1008 10:50:02.511601    7816 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:50:02.511609    7816 status.go:174] checking status of ha-500000 ...
	I1008 10:50:02.511903    7816 status.go:371] ha-500000 host status = "Stopped" (err=<nil>)
	I1008 10:50:02.511908    7816 status.go:384] host is not running, skipping remaining checks
	I1008 10:50:02.511910    7816 status.go:176] ha-500000 status: &{Name:ha-500000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-500000 status -v=7 --alsologtostderr": ha-500000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (36.083375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.28s)
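
The three assertions above scan the plain-text status output for per-node markers; with only the primary node present, each count is one instead of the expected two control planes, three kubelets, and two apiservers. A small Go sketch of that style of counting (an illustration of the check, not the test's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status text as printed above: a single stopped control-plane node.
		status := `ha-500000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped`

		fmt.Println("control planes:", strings.Count(status, "type: Control Plane"))   // 1, want 2
		fmt.Println("kubelets stopped:", strings.Count(status, "kubelet: Stopped"))    // 1, want 3
		fmt.Println("apiservers stopped:", strings.Count(status, "apiserver: Stopped")) // 1, want 2
	}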

TestMultiControlPlane/serial/RestartCluster (5.26s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:562: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.186256958s)

-- stdout --
	* [ha-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-500000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:50:02.581319    7820 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:50:02.581476    7820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:02.581480    7820 out.go:358] Setting ErrFile to fd 2...
	I1008 10:50:02.581482    7820 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:02.581609    7820 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:50:02.582673    7820 out.go:352] Setting JSON to false
	I1008 10:50:02.600407    7820 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4772,"bootTime":1728405030,"procs":561,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:50:02.600504    7820 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:50:02.605837    7820 out.go:177] * [ha-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:50:02.608833    7820 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:50:02.608865    7820 notify.go:220] Checking for updates...
	I1008 10:50:02.615755    7820 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:50:02.618764    7820 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:50:02.621650    7820 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:50:02.624734    7820 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:50:02.627762    7820 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:50:02.629337    7820 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:50:02.629611    7820 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:50:02.633783    7820 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:50:02.640601    7820 start.go:297] selected driver: qemu2
	I1008 10:50:02.640607    7820 start.go:901] validating driver "qemu2" against &{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:50:02.640658    7820 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:50:02.643172    7820 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:50:02.643199    7820 cni.go:84] Creating CNI manager for ""
	I1008 10:50:02.643222    7820 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 10:50:02.643271    7820 start.go:340] cluster config:
	{Name:ha-500000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:ha-500000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:50:02.647655    7820 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:50:02.655799    7820 out.go:177] * Starting "ha-500000" primary control-plane node in "ha-500000" cluster
	I1008 10:50:02.659701    7820 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:50:02.659715    7820 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:50:02.659728    7820 cache.go:56] Caching tarball of preloaded images
	I1008 10:50:02.659798    7820 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:50:02.659804    7820 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:50:02.659876    7820 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/ha-500000/config.json ...
	I1008 10:50:02.660288    7820 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:50:02.660327    7820 start.go:364] duration metric: took 29.875µs to acquireMachinesLock for "ha-500000"
	I1008 10:50:02.660338    7820 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:50:02.660342    7820 fix.go:54] fixHost starting: 
	I1008 10:50:02.660458    7820 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W1008 10:50:02.660468    7820 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:50:02.668702    7820 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I1008 10:50:02.672712    7820 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:50:02.672754    7820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:a9:51:27:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:50:02.675016    7820 main.go:141] libmachine: STDOUT: 
	I1008 10:50:02.675047    7820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:50:02.675080    7820 fix.go:56] duration metric: took 14.735209ms for fixHost
	I1008 10:50:02.675085    7820 start.go:83] releasing machines lock for "ha-500000", held for 14.7535ms
	W1008 10:50:02.675093    7820 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:50:02.675127    7820 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:50:02.675132    7820 start.go:729] Will try again in 5 seconds ...
	I1008 10:50:07.677308    7820 start.go:360] acquireMachinesLock for ha-500000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:50:07.677808    7820 start.go:364] duration metric: took 437µs to acquireMachinesLock for "ha-500000"
	I1008 10:50:07.677927    7820 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:50:07.677946    7820 fix.go:54] fixHost starting: 
	I1008 10:50:07.678687    7820 fix.go:112] recreateIfNeeded on ha-500000: state=Stopped err=<nil>
	W1008 10:50:07.678720    7820 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:50:07.683178    7820 out.go:177] * Restarting existing qemu2 VM for "ha-500000" ...
	I1008 10:50:07.687108    7820 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:50:07.687330    7820 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d6:a4:a9:51:27:c0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/ha-500000/disk.qcow2
	I1008 10:50:07.697787    7820 main.go:141] libmachine: STDOUT: 
	I1008 10:50:07.697860    7820 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:50:07.697941    7820 fix.go:56] duration metric: took 19.996333ms for fixHost
	I1008 10:50:07.697964    7820 start.go:83] releasing machines lock for "ha-500000", held for 20.13075ms
	W1008 10:50:07.698149    7820 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-500000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:50:07.707125    7820 out.go:201] 
	W1008 10:50:07.711182    7820 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:50:07.711258    7820 out.go:270] * 
	* 
	W1008 10:50:07.713894    7820 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:50:07.722083    7820 out.go:201] 

** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-500000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (74.08275ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.26s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:415: expected profile "ha-500000" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount
\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31
.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuth
Sock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.308084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.09s)

TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-500000 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-500000 --control-plane -v=7 --alsologtostderr: exit status 83 (45.2375ms)

-- stdout --
	* The control-plane node ha-500000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-500000"

-- /stdout --
** stderr ** 
	I1008 10:50:07.932356    7839 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:50:07.932571    7839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:07.932574    7839 out.go:358] Setting ErrFile to fd 2...
	I1008 10:50:07.932577    7839 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:07.932723    7839 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:50:07.932990    7839 mustload.go:65] Loading cluster: ha-500000
	I1008 10:50:07.933229    7839 config.go:182] Loaded profile config "ha-500000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:50:07.937943    7839 out.go:177] * The control-plane node ha-500000 host is not running: state=Stopped
	I1008 10:50:07.940944    7839 out.go:177]   To start a cluster, run: "minikube start -p ha-500000"

** /stderr **
ha_test.go:609: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-500000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.43375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:305: expected profile "ha-500000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerP
ort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\",\"ContainerR
untime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSH
AgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:309: expected profile "ha-500000" in json of 'profile list' to have "HAppy" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-500000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-500000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":
1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"ha-500000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.1\
",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSoc
k\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-500000 -n ha-500000: exit status 7 (34.206042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-500000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.09s)

TestImageBuild/serial/Setup (9.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-851000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-851000 --driver=qemu2 : exit status 80 (9.8190125s)

-- stdout --
	* [image-851000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-851000" primary control-plane node in "image-851000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-851000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-851000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-851000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-851000 -n image-851000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-851000 -n image-851000: exit status 7 (73.647209ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-851000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.89s)
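
As elsewhere in this report, the harness keys off exit codes: 80 for GUEST_PROVISION failures, 83 for a stopped host, 7 from status. A hedged Go sketch of recovering a child process's exit code (binary path and flags as in the log above; the wrapper itself is illustrative, not the test framework's code):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "start", "-p", "image-851000", "--driver=qemu2")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// An unprovisionable guest surfaces as exit status 80 above.
			fmt.Println("exit status:", exitErr.ExitCode())
		} else if err != nil {
			fmt.Println("failed to run:", err)
		}
	}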

TestJSONOutput/start/Command (9.85s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-113000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-113000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.846717291s)

-- stdout --
	{"specversion":"1.0","id":"58f2c8e9-1804-4bc8-8073-7e0c8483d3a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-113000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6b9eb28e-9e4d-4547-9e76-031e6bcdf244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19774"}}
	{"specversion":"1.0","id":"adfa759b-3242-4b78-bdf6-42389e1c1e88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig"}}
	{"specversion":"1.0","id":"dedfdbdf-1326-4ea1-a4dd-79c38e3c8828","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"9f4e2c7b-2600-4954-bcc6-647b65a05722","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"14526b4e-9764-4c24-8b26-3dda5575225b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube"}}
	{"specversion":"1.0","id":"92b9777e-ee0b-403d-a0d7-cc23d99b1a5d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"6aa6e726-a7f3-4f39-b589-2bc80743b361","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"55c3cb4c-7f5e-4ea5-9da5-274d00273fd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"85a555e7-6b8b-4af0-bdcf-090d052b4d5b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-113000\" primary control-plane node in \"json-output-113000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"0cdf2bb9-42fb-42f2-bade-d44f2e9fac59","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"c68e89b7-18a1-44a1-b543-916ebdc3506c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-113000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"77ce23b2-8eab-44cb-8865-b639ebb9537d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"00461e35-d949-454d-b366-b8c295e5c06d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"ca51e18e-cd73-4ddf-8da4-c959da26f8bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-113000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"3878e808-babb-491f-a510-05b9c3b5095f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"b9be9695-dafd-426b-bf07-350bf07455eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-113000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.85s)
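
The two extra errors here ("unable to marshal output" and "invalid character 'O' looking for beginning of value") follow mechanically from the provisioning failure: with --output=json the test treats every stdout line as a CloudEvents JSON object, but socket_vmnet_client writes its raw "OUTPUT:" and "ERROR:" lines into the same stream, and the first non-JSON line aborts the parse. Below is a simplified Go sketch of that per-line validation; the struct fields are illustrative assumptions, not the test's actual types.

// cloudevents_lines.go - simplified sketch of per-line CloudEvents parsing (hypothetical).
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"strings"
)

// cloudEvent mirrors the fields visible in the JSON lines above
// (an assumption; the real test uses different types).
type cloudEvent struct {
	SpecVersion string          `json:"specversion"`
	Type        string          `json:"type"`
	Data        json.RawMessage `json:"data"`
}

func main() {
	stdout := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused`

	sc := bufio.NewScanner(strings.NewReader(stdout))
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			// The bare "OUTPUT:" line lands here with
			// "invalid character 'O' looking for beginning of value".
			fmt.Println("unable to marshal output:", sc.Text(), err)
			return
		}
		fmt.Println("valid event:", ev.Type)
	}
}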

TestJSONOutput/pause/Command (0.09s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-113000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-113000 --output=json --user=testUser: exit status 83 (84.937875ms)

-- stdout --
	{"specversion":"1.0","id":"177dca9d-eca4-4704-8cdb-a034fa8398f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-113000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"2fee815c-759c-45f1-8ac8-dd54df9e90aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-113000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-113000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.09s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-113000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-113000 --output=json --user=testUser: exit status 83 (48.444833ms)

-- stdout --
	* The control-plane node json-output-113000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-113000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-113000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-113000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.1s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-617000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-617000 --driver=qemu2 : exit status 80 (9.787749958s)

-- stdout --
	* [first-617000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-617000" primary control-plane node in "first-617000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-617000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-617000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-617000 --driver=qemu2 ": exit status 80
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-08 10:50:41.347963 -0700 PDT m=+511.913705709
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-618000 -n second-618000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-618000 -n second-618000: exit status 85 (85.014167ms)

-- stdout --
	* Profile "second-618000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-618000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-618000" host is not running, skipping log retrieval (state="* Profile \"second-618000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-618000\"")
helpers_test.go:175: Cleaning up "second-618000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-618000
panic.go:629: *** TestMinikubeProfile FAILED at 2024-10-08 10:50:41.549038 -0700 PDT m=+512.114780459
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-617000 -n first-617000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-617000 -n first-617000: exit status 7 (34.732666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-617000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-617000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-617000
--- FAIL: TestMinikubeProfile (10.10s)

TestMountStart/serial/StartWithMountFirst (10.65s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-102000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-102000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.578329042s)

-- stdout --
	* [mount-start-1-102000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-102000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-102000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-102000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-102000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-102000 -n mount-start-1-102000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-102000 -n mount-start-1-102000: exit status 7 (73.336292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-102000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.65s)

TestMultiNode/serial/FreshStart2Nodes (10.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (9.978242667s)

-- stdout --
	* [multinode-437000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-437000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:50:52.543317    7984 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:50:52.543469    7984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:52.543473    7984 out.go:358] Setting ErrFile to fd 2...
	I1008 10:50:52.543475    7984 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:50:52.543591    7984 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:50:52.544741    7984 out.go:352] Setting JSON to false
	I1008 10:50:52.562688    7984 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4822,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:50:52.562756    7984 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:50:52.567494    7984 out.go:177] * [multinode-437000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:50:52.575515    7984 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:50:52.575591    7984 notify.go:220] Checking for updates...
	I1008 10:50:52.583466    7984 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:50:52.586506    7984 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:50:52.589474    7984 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:50:52.592485    7984 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:50:52.595485    7984 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:50:52.598666    7984 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:50:52.602449    7984 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 10:50:52.608431    7984 start.go:297] selected driver: qemu2
	I1008 10:50:52.608438    7984 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:50:52.608446    7984 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:50:52.610909    7984 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:50:52.614473    7984 out.go:177] * Automatically selected the socket_vmnet network
	I1008 10:50:52.617542    7984 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:50:52.617581    7984 cni.go:84] Creating CNI manager for ""
	I1008 10:50:52.617600    7984 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1008 10:50:52.617604    7984 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 10:50:52.617660    7984 start.go:340] cluster config:
	{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:50:52.622362    7984 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:50:52.630453    7984 out.go:177] * Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	I1008 10:50:52.634510    7984 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:50:52.634524    7984 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:50:52.634533    7984 cache.go:56] Caching tarball of preloaded images
	I1008 10:50:52.634607    7984 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:50:52.634613    7984 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:50:52.634861    7984 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/multinode-437000/config.json ...
	I1008 10:50:52.634873    7984 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/multinode-437000/config.json: {Name:mk4a74382be953d191fadaf540ef72a34cf499e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:50:52.635243    7984 start.go:360] acquireMachinesLock for multinode-437000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:50:52.635296    7984 start.go:364] duration metric: took 47.208µs to acquireMachinesLock for "multinode-437000"
	I1008 10:50:52.635307    7984 start.go:93] Provisioning new machine with config: &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:50:52.635335    7984 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:50:52.638510    7984 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:50:52.656184    7984 start.go:159] libmachine.API.Create for "multinode-437000" (driver="qemu2")
	I1008 10:50:52.656211    7984 client.go:168] LocalClient.Create starting
	I1008 10:50:52.656287    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:50:52.656327    7984 main.go:141] libmachine: Decoding PEM data...
	I1008 10:50:52.656342    7984 main.go:141] libmachine: Parsing certificate...
	I1008 10:50:52.656391    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:50:52.656422    7984 main.go:141] libmachine: Decoding PEM data...
	I1008 10:50:52.656431    7984 main.go:141] libmachine: Parsing certificate...
	I1008 10:50:52.656871    7984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:50:52.803045    7984 main.go:141] libmachine: Creating SSH key...
	I1008 10:50:52.940860    7984 main.go:141] libmachine: Creating Disk image...
	I1008 10:50:52.940868    7984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:50:52.941089    7984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:50:52.950817    7984 main.go:141] libmachine: STDOUT: 
	I1008 10:50:52.950838    7984 main.go:141] libmachine: STDERR: 
	I1008 10:50:52.950891    7984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2 +20000M
	I1008 10:50:52.959404    7984 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:50:52.959417    7984 main.go:141] libmachine: STDERR: 
	I1008 10:50:52.959434    7984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:50:52.959439    7984 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:50:52.959453    7984 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:50:52.959479    7984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=aa:7d:51:af:a7:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:50:52.961355    7984 main.go:141] libmachine: STDOUT: 
	I1008 10:50:52.961368    7984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:50:52.961387    7984 client.go:171] duration metric: took 305.169417ms to LocalClient.Create
	I1008 10:50:54.963590    7984 start.go:128] duration metric: took 2.328232166s to createHost
	I1008 10:50:54.963670    7984 start.go:83] releasing machines lock for "multinode-437000", held for 2.328369042s
	W1008 10:50:54.963800    7984 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:50:54.978807    7984 out.go:177] * Deleting "multinode-437000" in qemu2 ...
	W1008 10:50:55.002783    7984 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:50:55.002823    7984 start.go:729] Will try again in 5 seconds ...
	I1008 10:51:00.005249    7984 start.go:360] acquireMachinesLock for multinode-437000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:51:00.005842    7984 start.go:364] duration metric: took 512.291µs to acquireMachinesLock for "multinode-437000"
	I1008 10:51:00.005933    7984 start.go:93] Provisioning new machine with config: &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:51:00.006135    7984 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:51:00.017682    7984 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:51:00.065704    7984 start.go:159] libmachine.API.Create for "multinode-437000" (driver="qemu2")
	I1008 10:51:00.065756    7984 client.go:168] LocalClient.Create starting
	I1008 10:51:00.065903    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:51:00.065983    7984 main.go:141] libmachine: Decoding PEM data...
	I1008 10:51:00.066000    7984 main.go:141] libmachine: Parsing certificate...
	I1008 10:51:00.066058    7984 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:51:00.066115    7984 main.go:141] libmachine: Decoding PEM data...
	I1008 10:51:00.066129    7984 main.go:141] libmachine: Parsing certificate...
	I1008 10:51:00.066719    7984 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:51:00.226263    7984 main.go:141] libmachine: Creating SSH key...
	I1008 10:51:00.423731    7984 main.go:141] libmachine: Creating Disk image...
	I1008 10:51:00.423741    7984 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:51:00.423987    7984 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:51:00.434440    7984 main.go:141] libmachine: STDOUT: 
	I1008 10:51:00.434461    7984 main.go:141] libmachine: STDERR: 
	I1008 10:51:00.434517    7984 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2 +20000M
	I1008 10:51:00.443075    7984 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:51:00.443094    7984 main.go:141] libmachine: STDERR: 
	I1008 10:51:00.443105    7984 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:51:00.443111    7984 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:51:00.443122    7984 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:51:00.443145    7984 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8e:7e:f4:34:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:51:00.444956    7984 main.go:141] libmachine: STDOUT: 
	I1008 10:51:00.444982    7984 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:51:00.444995    7984 client.go:171] duration metric: took 379.235416ms to LocalClient.Create
	I1008 10:51:02.447185    7984 start.go:128] duration metric: took 2.4410265s to createHost
	I1008 10:51:02.447291    7984 start.go:83] releasing machines lock for "multinode-437000", held for 2.441428667s
	W1008 10:51:02.447659    7984 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:51:02.458223    7984 out.go:201] 
	W1008 10:51:02.462222    7984 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:51:02.462273    7984 out.go:270] * 
	* 
	W1008 10:51:02.464734    7984 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:51:02.475201    7984 out.go:201] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-437000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (74.331583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.06s)
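
The verbose trace above (-v=8 --alsologtostderr) shows the launch chain in full: libmachine does not execute qemu-system-aarch64 directly but wraps it in /opt/socket_vmnet/bin/socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected descriptor to QEMU as fd 3 (hence "-netdev socket,id=net0,fd=3" in the command line). Because the connect happens before QEMU starts, the run dies immediately with "Connection refused" and exit status 1, producing no VM output at all. A hypothetical Go sketch of that connect-then-exec wrapper pattern follows; the real wrapper is a separate C program, so all names here are illustrative.

// launch_chain.go - sketch of the connect-then-exec wrapper pattern (hypothetical).
package main

import (
	"fmt"
	"net"
	"os"
	"os/exec"
)

func main() {
	// Connect to the daemon first; this is the step that fails in the logs.
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		fmt.Fprintf(os.Stderr, "Failed to connect to %q: %v\n", "/var/run/socket_vmnet", err)
		os.Exit(1) // matches the "exit status 1" recorded above
	}
	sockFile, err := conn.(*net.UnixConn).File() // duplicate the descriptor for the child
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// ExtraFiles[0] becomes fd 3 in the child process, which is what
	// "-netdev socket,id=net0,fd=3" refers to.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	cmd.ExtraFiles = []*os.File{sockFile}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}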

TestMultiNode/serial/DeployApp2Nodes (84.69s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (63.429875ms)

** stderr ** 
	error: cluster "multinode-437000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- rollout status deployment/busybox: exit status 1 (62.026958ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (61.545709ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:02.752991    6907 retry.go:31] will retry after 534.545648ms: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.251291ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:03.399108    6907 retry.go:31] will retry after 1.101357694s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.427041ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:04.611320    6907 retry.go:31] will retry after 2.078078046s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.387458ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:06.800086    6907 retry.go:31] will retry after 2.879573156s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.847209ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:09.791801    6907 retry.go:31] will retry after 5.393891818s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.80975ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:15.296951    6907 retry.go:31] will retry after 10.184368291s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.863166ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:25.591627    6907 retry.go:31] will retry after 11.751107823s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (109.745208ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:37.454584    6907 retry.go:31] will retry after 21.182298241s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (107.936208ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
I1008 10:51:58.747424    6907 retry.go:31] will retry after 28.117964314s: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.910042ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (61.272875ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.io: exit status 1 (61.354667ms)

** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default: exit status 1 (61.390209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (62.105458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.55925ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (84.69s)
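
The failures above all reduce to the same root cause: the qemu2 VM never came up, so the profile's kubeconfig has no reachable API server and every kubectl invocation exits 1 with `no server found for cluster "multinode-437000"`, which the harness keeps retrying with growing delays (retry.go:31). As a standalone illustration of that poll loop, here is a minimal Go sketch; the binary path, profile name, and jsonpath query are taken from the log above, while the attempt count and backoff factor are illustrative assumptions, not the test's actual helper:

	// Illustrative sketch of the pod-IP poll logged above; not the test's
	// actual retry helper. Attempt count and backoff growth are assumed.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		delay := 10 * time.Second
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("out/minikube-darwin-arm64", "kubectl",
				"-p", "multinode-437000", "--",
				"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
			if err == nil {
				fmt.Println(string(out))
				return
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay = delay * 3 / 2 // the log's delays grow roughly like this
		}
	}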

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-437000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (62.443ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.484042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.10s)

                                                
                                    
TestMultiNode/serial/AddNode (0.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr: exit status 83 (48.441ms)

                                                
                                                
-- stdout --
	* The control-plane node multinode-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-437000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:27.387334    8064 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:27.387533    8064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.387536    8064 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:27.387538    8064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.387675    8064 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:27.387922    8064 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:27.388150    8064 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:27.394871    8064 out.go:177] * The control-plane node multinode-437000 host is not running: state=Stopped
	I1008 10:52:27.397900    8064 out.go:177]   To start a cluster, run: "minikube start -p multinode-437000"

                                                
                                                
** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-437000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.182041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-437000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-437000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (27.492333ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: multinode-437000

                                                
                                                
** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-437000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-437000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.6635ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
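
The decode error at multinode_test.go:230 ("unexpected end of JSON input") follows directly from the kubectl failure: with no context available, kubectl writes only to stderr, the captured stdout is empty, and unmarshalling empty input always fails with exactly that message. A minimal Go reproduction, assuming nothing beyond the standard library:

	// Reproduces the "unexpected end of JSON input" decode failure seen
	// above: kubectl printed nothing to stdout, so the test unmarshals "".
	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		var labels []map[string]string
		err := json.Unmarshal([]byte(""), &labels)
		fmt.Println(err) // unexpected end of JSON input
	}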

                                                
                                    
TestMultiNode/serial/ProfileList (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-437000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-437000\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"multinode-437000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMN
UMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.1\",\"ClusterName\":\"multinode-437000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVe
rsion\":\"v1.31.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\"
:\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (33.844083ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.09s)
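
The assertion at multinode_test.go:166 decodes the quoted `profile list --output json` output and counts `Config.Nodes`; because the cluster never grew past its single control-plane entry, it finds 1 node where 3 were expected. A trimmed Go sketch of that decode, using a stand-in struct that mirrors only the fields needed (not minikube's own config types):

	// Stand-in struct for counting nodes in the profile JSON quoted above;
	// the JSON literal is cut down to the relevant fields.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type profileList struct {
		Valid []struct {
			Name   string
			Config struct {
				Nodes []struct {
					ControlPlane bool
					Worker       bool
				}
			}
		} `json:"valid"`
	}

	func main() {
		raw := []byte(`{"invalid":[],"valid":[{"Name":"multinode-437000","Config":{"Nodes":[{"ControlPlane":true,"Worker":true}]}}]}`)
		var pl profileList
		if err := json.Unmarshal(raw, &pl); err != nil {
			panic(err)
		}
		fmt.Println(len(pl.Valid[0].Config.Nodes)) // prints 1; the test wanted 3
	}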

                                                
                                    
TestMultiNode/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --output json --alsologtostderr: exit status 7 (34.856958ms)

                                                
                                                
-- stdout --
	{"Name":"multinode-437000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:27.620711    8076 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:27.620902    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.620905    8076 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:27.620907    8076 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.621051    8076 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:27.621195    8076 out.go:352] Setting JSON to true
	I1008 10:52:27.621205    8076 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:27.621266    8076 notify.go:220] Checking for updates...
	I1008 10:52:27.621423    8076 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:27.621430    8076 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:27.621675    8076 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:27.621679    8076 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:27.621681    8076 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-437000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cluster.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.275416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
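
The decode failure at multinode_test.go:191 is a shape mismatch rather than bad JSON: with a single node, `status --output json` emits one object, while the test unmarshals into the slice type []cluster.Status. A minimal Go reproduction with a stand-in Status struct (not minikube's cluster.Status):

	// A lone JSON object cannot be unmarshalled into a slice; this mirrors
	// the []cluster.Status error above with a stand-in struct.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type Status struct {
		Name string
		Host string
	}

	func main() {
		out := []byte(`{"Name":"multinode-437000","Host":"Stopped"}`)
		var statuses []Status
		err := json.Unmarshal(out, &statuses)
		fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
	}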

                                                
                                    
TestMultiNode/serial/StopNode (0.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 node stop m03: exit status 85 (49.777209ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-437000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status: exit status 7 (34.802083ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr: exit status 7 (34.191167ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:27.774661    8084 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:27.774831    8084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.774839    8084 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:27.774842    8084 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.774978    8084 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:27.775104    8084 out.go:352] Setting JSON to false
	I1008 10:52:27.775114    8084 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:27.775175    8084 notify.go:220] Checking for updates...
	I1008 10:52:27.775375    8084 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:27.775382    8084 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:27.775626    8084 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:27.775629    8084 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:27.775631    8084 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr": multinode-437000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.167ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (43.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 node start m03 -v=7 --alsologtostderr: exit status 85 (53.295667ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:27.845645    8088 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:27.845879    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.845882    8088 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:27.845884    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.846042    8088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:27.846271    8088 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:27.846493    8088 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:27.850364    8088 out.go:201] 
	W1008 10:52:27.853447    8088 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W1008 10:52:27.853452    8088 out.go:270] * 
	* 
	W1008 10:52:27.855318    8088 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:52:27.859473    8088 out.go:201] 

                                                
                                                
** /stderr **
multinode_test.go:284: I1008 10:52:27.845645    8088 out.go:345] Setting OutFile to fd 1 ...
I1008 10:52:27.845879    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:52:27.845882    8088 out.go:358] Setting ErrFile to fd 2...
I1008 10:52:27.845884    8088 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1008 10:52:27.846042    8088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
I1008 10:52:27.846271    8088 mustload.go:65] Loading cluster: multinode-437000
I1008 10:52:27.846493    8088 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I1008 10:52:27.850364    8088 out.go:201] 
W1008 10:52:27.853447    8088 out.go:270] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W1008 10:52:27.853452    8088 out.go:270] * 
* 
W1008 10:52:27.855318    8088 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I1008 10:52:27.859473    8088 out.go:201] 

                                                
                                                
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-437000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (33.915292ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:27.896500    8090 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:27.896683    8090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.896690    8090 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:27.896692    8090 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:27.896801    8090 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:27.896927    8090 out.go:352] Setting JSON to false
	I1008 10:52:27.896937    8090 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:27.896995    8090 notify.go:220] Checking for updates...
	I1008 10:52:27.897161    8090 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:27.897168    8090 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:27.897419    8090 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:27.897422    8090 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:27.897424    8090 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:27.898335    6907 retry.go:31] will retry after 880.451565ms: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (80.302375ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:28.858179    8092 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:28.858396    8092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:28.858400    8092 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:28.858403    8092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:28.858582    8092 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:28.858739    8092 out.go:352] Setting JSON to false
	I1008 10:52:28.858752    8092 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:28.858788    8092 notify.go:220] Checking for updates...
	I1008 10:52:28.859950    8092 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:28.860039    8092 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:28.860348    8092 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:28.860354    8092 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:28.860357    8092 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:28.861477    6907 retry.go:31] will retry after 1.670989461s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (78.075458ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:30.610796    8094 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:30.611010    8094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:30.611014    8094 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:30.611017    8094 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:30.611182    8094 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:30.611339    8094 out.go:352] Setting JSON to false
	I1008 10:52:30.611351    8094 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:30.611394    8094 notify.go:220] Checking for updates...
	I1008 10:52:30.611615    8094 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:30.611623    8094 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:30.611910    8094 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:30.611914    8094 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:30.611917    8094 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:30.612892    6907 retry.go:31] will retry after 1.883415267s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (78.456958ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:32.575154    8096 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:32.575351    8096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:32.575355    8096 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:32.575358    8096 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:32.575514    8096 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:32.575664    8096 out.go:352] Setting JSON to false
	I1008 10:52:32.575676    8096 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:32.575712    8096 notify.go:220] Checking for updates...
	I1008 10:52:32.575921    8096 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:32.575929    8096 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:32.576216    8096 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:32.576221    8096 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:32.576224    8096 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:32.577255    6907 retry.go:31] will retry after 4.820337552s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (79.169125ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:37.477109    8101 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:37.477330    8101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:37.477334    8101 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:37.477337    8101 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:37.477508    8101 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:37.477674    8101 out.go:352] Setting JSON to false
	I1008 10:52:37.477686    8101 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:37.477726    8101 notify.go:220] Checking for updates...
	I1008 10:52:37.477945    8101 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:37.477953    8101 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:37.478254    8101 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:37.478259    8101 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:37.478261    8101 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:37.479238    6907 retry.go:31] will retry after 6.759772667s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (80.379291ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:44.319581    8103 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:44.319790    8103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:44.319794    8103 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:44.319797    8103 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:44.319956    8103 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:44.320113    8103 out.go:352] Setting JSON to false
	I1008 10:52:44.320125    8103 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:44.320166    8103 notify.go:220] Checking for updates...
	I1008 10:52:44.320403    8103 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:44.320414    8103 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:44.320713    8103 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:44.320718    8103 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:44.320720    8103 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:44.321755    6907 retry.go:31] will retry after 5.317846183s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (80.586167ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:49.720339    8105 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:49.720572    8105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:49.720576    8105 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:49.720579    8105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:49.720739    8105 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:49.720896    8105 out.go:352] Setting JSON to false
	I1008 10:52:49.720908    8105 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:49.720941    8105 notify.go:220] Checking for updates...
	I1008 10:52:49.721147    8105 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:49.721155    8105 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:49.721459    8105 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:49.721464    8105 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:49.721466    8105 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:49.722537    6907 retry.go:31] will retry after 7.77504473s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (78.514583ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:52:57.576418    8110 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:52:57.576647    8110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:57.576652    8110 out.go:358] Setting ErrFile to fd 2...
	I1008 10:52:57.576655    8110 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:52:57.576816    8110 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:52:57.576972    8110 out.go:352] Setting JSON to false
	I1008 10:52:57.576985    8110 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:52:57.577021    8110 notify.go:220] Checking for updates...
	I1008 10:52:57.577253    8110 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:52:57.577261    8110 status.go:174] checking status of multinode-437000 ...
	I1008 10:52:57.577568    8110 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:52:57.577572    8110 status.go:384] host is not running, skipping remaining checks
	I1008 10:52:57.577575    8110 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I1008 10:52:57.578581    6907 retry.go:31] will retry after 14.064662517s: exit status 7
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr: exit status 7 (79.797833ms)

                                                
                                                
-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:53:11.723350    8112 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:53:11.723554    8112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:11.723558    8112 out.go:358] Setting ErrFile to fd 2...
	I1008 10:53:11.723561    8112 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:11.723720    8112 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:53:11.723874    8112 out.go:352] Setting JSON to false
	I1008 10:53:11.723885    8112 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:53:11.723921    8112 notify.go:220] Checking for updates...
	I1008 10:53:11.724159    8112 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:53:11.724172    8112 status.go:174] checking status of multinode-437000 ...
	I1008 10:53:11.724474    8112 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:53:11.724478    8112 status.go:384] host is not running, skipping remaining checks
	I1008 10:53:11.724481    8112 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-437000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (35.978042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (43.95s)
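
Note the two distinct exit codes in this block: 85 (GUEST_NODE_RETRIEVE, node m03 does not exist) from `node start`, and 7 from `status`, which the harness polls repeatedly before giving up. For callers who need to branch on these codes, a small Go sketch of recovering the exit status via os/exec (command and profile name as in the log; the handling itself is illustrative):

	// Recover minikube's exit code from Go; in this run, status exits 7
	// because the host is stopped.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "-p", "multinode-437000", "status")
		if err := cmd.Run(); err != nil {
			var ee *exec.ExitError
			if errors.As(err, &ee) {
				fmt.Println("exit status", ee.ExitCode())
			}
		}
	}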

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (7.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-437000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-437000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-437000: (1.872753208s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.225439959s)

                                                
                                                
-- stdout --
	* [multinode-437000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 10:53:13.734884    8128 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:53:13.735094    8128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:13.735098    8128 out.go:358] Setting ErrFile to fd 2...
	I1008 10:53:13.735101    8128 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:13.735266    8128 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:53:13.736530    8128 out.go:352] Setting JSON to false
	I1008 10:53:13.756197    8128 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4963,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:53:13.756270    8128 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:53:13.761500    8128 out.go:177] * [multinode-437000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:53:13.768330    8128 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:53:13.768387    8128 notify.go:220] Checking for updates...
	I1008 10:53:13.774520    8128 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:53:13.775729    8128 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:53:13.778446    8128 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:53:13.781488    8128 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:53:13.784510    8128 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:53:13.787818    8128 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:53:13.787881    8128 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:53:13.792452    8128 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:53:13.799499    8128 start.go:297] selected driver: qemu2
	I1008 10:53:13.799506    8128 start.go:901] validating driver "qemu2" against &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:53:13.799562    8128 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:53:13.802083    8128 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:53:13.802112    8128 cni.go:84] Creating CNI manager for ""
	I1008 10:53:13.802136    8128 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 10:53:13.802178    8128 start.go:340] cluster config:
	{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:53:13.806777    8128 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:13.815359    8128 out.go:177] * Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	I1008 10:53:13.819466    8128 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:53:13.819484    8128 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:53:13.819498    8128 cache.go:56] Caching tarball of preloaded images
	I1008 10:53:13.819580    8128 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:53:13.819587    8128 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:53:13.819650    8128 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/multinode-437000/config.json ...
	I1008 10:53:13.820070    8128 start.go:360] acquireMachinesLock for multinode-437000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:53:13.820128    8128 start.go:364] duration metric: took 51.417µs to acquireMachinesLock for "multinode-437000"
	I1008 10:53:13.820137    8128 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:53:13.820142    8128 fix.go:54] fixHost starting: 
	I1008 10:53:13.820274    8128 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W1008 10:53:13.820286    8128 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:53:13.823531    8128 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I1008 10:53:13.831510    8128 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:53:13.831547    8128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8e:7e:f4:34:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:53:13.833808    8128 main.go:141] libmachine: STDOUT: 
	I1008 10:53:13.833827    8128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:53:13.833858    8128 fix.go:56] duration metric: took 13.713333ms for fixHost
	I1008 10:53:13.833864    8128 start.go:83] releasing machines lock for "multinode-437000", held for 13.730791ms
	W1008 10:53:13.833871    8128 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:53:13.833942    8128 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:13.833948    8128 start.go:729] Will try again in 5 seconds ...
	I1008 10:53:18.836091    8128 start.go:360] acquireMachinesLock for multinode-437000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:53:18.836525    8128 start.go:364] duration metric: took 327µs to acquireMachinesLock for "multinode-437000"
	I1008 10:53:18.836705    8128 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:53:18.836725    8128 fix.go:54] fixHost starting: 
	I1008 10:53:18.837459    8128 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W1008 10:53:18.837483    8128 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:53:18.844952    8128 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I1008 10:53:18.847974    8128 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:53:18.848189    8128 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8e:7e:f4:34:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:53:18.858372    8128 main.go:141] libmachine: STDOUT: 
	I1008 10:53:18.858462    8128 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:53:18.858563    8128 fix.go:56] duration metric: took 21.838291ms for fixHost
	I1008 10:53:18.858589    8128 start.go:83] releasing machines lock for "multinode-437000", held for 22.005375ms
	W1008 10:53:18.858852    8128 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:18.866922    8128 out.go:201] 
	W1008 10:53:18.870786    8128 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:53:18.870814    8128 out.go:270] * 
	* 
	W1008 10:53:18.873564    8128 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:53:18.880885    8128 out.go:201] 

** /stderr **
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-437000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-437000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (35.815708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (7.24s)

TestMultiNode/serial/DeleteNode (0.12s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 node delete m03: exit status 83 (45.721667ms)

-- stdout --
	* The control-plane node multinode-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-437000"

-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-437000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr: exit status 7 (35.456125ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:53:19.083978    8142 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:53:19.084177    8142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:19.084180    8142 out.go:358] Setting ErrFile to fd 2...
	I1008 10:53:19.084183    8142 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:19.084354    8142 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:53:19.084486    8142 out.go:352] Setting JSON to false
	I1008 10:53:19.084500    8142 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:53:19.084562    8142 notify.go:220] Checking for updates...
	I1008 10:53:19.084747    8142 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:53:19.084753    8142 status.go:174] checking status of multinode-437000 ...
	I1008 10:53:19.084993    8142 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:53:19.084997    8142 status.go:384] host is not running, skipping remaining checks
	I1008 10:53:19.084998    8142 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.171083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.12s)

TestMultiNode/serial/StopMultiNode (4.06s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-437000 stop: (3.917321709s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status: exit status 7 (71.253459ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr: exit status 7 (35.848042ms)

-- stdout --
	multinode-437000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I1008 10:53:23.143336    8171 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:53:23.143519    8171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:23.143522    8171 out.go:358] Setting ErrFile to fd 2...
	I1008 10:53:23.143525    8171 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:23.143657    8171 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:53:23.143783    8171 out.go:352] Setting JSON to false
	I1008 10:53:23.143794    8171 mustload.go:65] Loading cluster: multinode-437000
	I1008 10:53:23.143858    8171 notify.go:220] Checking for updates...
	I1008 10:53:23.144004    8171 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:53:23.144010    8171 status.go:174] checking status of multinode-437000 ...
	I1008 10:53:23.144254    8171 status.go:371] multinode-437000 host status = "Stopped" (err=<nil>)
	I1008 10:53:23.144258    8171 status.go:384] host is not running, skipping remaining checks
	I1008 10:53:23.144260    8171 status.go:176] multinode-437000 status: &{Name:multinode-437000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr": multinode-437000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-437000 status --alsologtostderr": multinode-437000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (34.656708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (4.06s)

TestMultiNode/serial/RestartMultiNode (5.27s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.192546041s)

-- stdout --
	* [multinode-437000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-437000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:53:23.211840    8175 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:53:23.212025    8175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:23.212032    8175 out.go:358] Setting ErrFile to fd 2...
	I1008 10:53:23.212035    8175 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:23.212175    8175 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:53:23.213541    8175 out.go:352] Setting JSON to false
	I1008 10:53:23.231756    8175 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4973,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:53:23.231831    8175 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:53:23.237152    8175 out.go:177] * [multinode-437000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:53:23.243958    8175 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:53:23.244007    8175 notify.go:220] Checking for updates...
	I1008 10:53:23.251093    8175 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:53:23.252328    8175 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:53:23.255068    8175 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:53:23.258079    8175 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:53:23.261135    8175 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:53:23.264358    8175 config.go:182] Loaded profile config "multinode-437000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:53:23.264622    8175 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:53:23.269105    8175 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:53:23.276017    8175 start.go:297] selected driver: qemu2
	I1008 10:53:23.276024    8175 start.go:901] validating driver "qemu2" against &{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:53:23.276075    8175 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:53:23.278547    8175 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:53:23.278570    8175 cni.go:84] Creating CNI manager for ""
	I1008 10:53:23.278592    8175 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1008 10:53:23.278645    8175 start.go:340] cluster config:
	{Name:multinode-437000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:multinode-437000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:53:23.283081    8175 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:23.291058    8175 out.go:177] * Starting "multinode-437000" primary control-plane node in "multinode-437000" cluster
	I1008 10:53:23.295130    8175 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:53:23.295146    8175 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:53:23.295155    8175 cache.go:56] Caching tarball of preloaded images
	I1008 10:53:23.295225    8175 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:53:23.295231    8175 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:53:23.295288    8175 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/multinode-437000/config.json ...
	I1008 10:53:23.295685    8175 start.go:360] acquireMachinesLock for multinode-437000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:53:23.295718    8175 start.go:364] duration metric: took 26.208µs to acquireMachinesLock for "multinode-437000"
	I1008 10:53:23.295726    8175 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:53:23.295731    8175 fix.go:54] fixHost starting: 
	I1008 10:53:23.295853    8175 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W1008 10:53:23.295863    8175 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:53:23.299987    8175 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I1008 10:53:23.308035    8175 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:53:23.308074    8175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8e:7e:f4:34:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:53:23.310295    8175 main.go:141] libmachine: STDOUT: 
	I1008 10:53:23.310315    8175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:53:23.310344    8175 fix.go:56] duration metric: took 14.611083ms for fixHost
	I1008 10:53:23.310348    8175 start.go:83] releasing machines lock for "multinode-437000", held for 14.626208ms
	W1008 10:53:23.310356    8175 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:53:23.310406    8175 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:23.310411    8175 start.go:729] Will try again in 5 seconds ...
	I1008 10:53:28.312549    8175 start.go:360] acquireMachinesLock for multinode-437000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:53:28.312919    8175 start.go:364] duration metric: took 299.333µs to acquireMachinesLock for "multinode-437000"
	I1008 10:53:28.313042    8175 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:53:28.313065    8175 fix.go:54] fixHost starting: 
	I1008 10:53:28.313716    8175 fix.go:112] recreateIfNeeded on multinode-437000: state=Stopped err=<nil>
	W1008 10:53:28.313742    8175 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:53:28.322162    8175 out.go:177] * Restarting existing qemu2 VM for "multinode-437000" ...
	I1008 10:53:28.326127    8175 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:53:28.326351    8175 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:8e:7e:f4:34:e2 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/multinode-437000/disk.qcow2
	I1008 10:53:28.336179    8175 main.go:141] libmachine: STDOUT: 
	I1008 10:53:28.336237    8175 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:53:28.336306    8175 fix.go:56] duration metric: took 23.24225ms for fixHost
	I1008 10:53:28.336323    8175 start.go:83] releasing machines lock for "multinode-437000", held for 23.381333ms
	W1008 10:53:28.336467    8175 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:28.344126    8175 out.go:201] 
	W1008 10:53:28.348239    8175 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:53:28.348271    8175 out.go:270] * 
	* 
	W1008 10:53:28.351125    8175 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:53:28.359092    8175 out.go:201] 

** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-437000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (74.880041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.27s)

TestMultiNode/serial/ValidateNameConflict (20.18s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-437000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000-m01 --driver=qemu2 : exit status 80 (9.92156775s)

-- stdout --
	* [multinode-437000-m01] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-437000-m01" primary control-plane node in "multinode-437000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-437000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-437000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-437000-m02 --driver=qemu2 : exit status 80 (10.012393458s)

-- stdout --
	* [multinode-437000-m02] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-437000-m02" primary control-plane node in "multinode-437000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-437000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-437000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-437000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-437000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-437000: exit status 83 (85.785417ms)

-- stdout --
	* The control-plane node multinode-437000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-437000"

-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-437000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-437000 -n multinode-437000: exit status 7 (35.131208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-437000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.18s)

TestPreload (10.07s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-534000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-534000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.909279s)

-- stdout --
	* [test-preload-534000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-534000" primary control-plane node in "test-preload-534000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-534000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:53:48.767221    8227 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:53:48.767384    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:48.767388    8227 out.go:358] Setting ErrFile to fd 2...
	I1008 10:53:48.767390    8227 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:53:48.767511    8227 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:53:48.768669    8227 out.go:352] Setting JSON to false
	I1008 10:53:48.786780    8227 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4998,"bootTime":1728405030,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:53:48.786852    8227 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:53:48.792622    8227 out.go:177] * [test-preload-534000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:53:48.800503    8227 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:53:48.800550    8227 notify.go:220] Checking for updates...
	I1008 10:53:48.805996    8227 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:53:48.809513    8227 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:53:48.812537    8227 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:53:48.815575    8227 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:53:48.818563    8227 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:53:48.821955    8227 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:53:48.822011    8227 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:53:48.826560    8227 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 10:53:48.833521    8227 start.go:297] selected driver: qemu2
	I1008 10:53:48.833529    8227 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:53:48.833536    8227 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:53:48.836045    8227 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:53:48.839612    8227 out.go:177] * Automatically selected the socket_vmnet network
	I1008 10:53:48.842651    8227 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 10:53:48.842676    8227 cni.go:84] Creating CNI manager for ""
	I1008 10:53:48.842699    8227 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:53:48.842703    8227 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 10:53:48.842742    8227 start.go:340] cluster config:
	{Name:test-preload-534000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:53:48.847399    8227 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.854526    8227 out.go:177] * Starting "test-preload-534000" primary control-plane node in "test-preload-534000" cluster
	I1008 10:53:48.858550    8227 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I1008 10:53:48.858644    8227 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/test-preload-534000/config.json ...
	I1008 10:53:48.858670    8227 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/test-preload-534000/config.json: {Name:mk8065e7a5059c9d44a2998983958a28f379375c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:53:48.858678    8227 cache.go:107] acquiring lock: {Name:mk5604f791a1ef2f4d9ad107fc168a2b664c55e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858689    8227 cache.go:107] acquiring lock: {Name:mk90cac23eb27574f21455cd7c728710145d5311 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858708    8227 cache.go:107] acquiring lock: {Name:mkd9ce2d14f0a4c42d1dafaad6b4195a366535e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858874    8227 cache.go:107] acquiring lock: {Name:mkcea9b50f71f8ccc21fcd281cb5253cb6af1610 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858872    8227 cache.go:107] acquiring lock: {Name:mkfa09a837b8ed19a67ae476d030484697f13d78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858952    8227 cache.go:107] acquiring lock: {Name:mkd247a14a725c49145e1d07ffaf0f4bc1a655bc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858976    8227 cache.go:107] acquiring lock: {Name:mkef43e4c9b3a64ef4951ebd1fa4e771a28bc8ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.858966    8227 cache.go:107] acquiring lock: {Name:mkd87c9d9ae6e4346a06bb69ee55d5b07aca5390 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:53:48.859069    8227 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1008 10:53:48.859100    8227 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1008 10:53:48.859237    8227 start.go:360] acquireMachinesLock for test-preload-534000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:53:48.859297    8227 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1008 10:53:48.859327    8227 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1008 10:53:48.859346    8227 start.go:364] duration metric: took 98.708µs to acquireMachinesLock for "test-preload-534000"
	I1008 10:53:48.859392    8227 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:53:48.859358    8227 start.go:93] Provisioning new machine with config: &{Name:test-preload-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:53:48.859426    8227 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:53:48.859487    8227 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1008 10:53:48.859562    8227 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:53:48.859578    8227 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:53:48.863510    8227 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:53:48.873799    8227 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1008 10:53:48.873855    8227 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:53:48.873901    8227 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1008 10:53:48.873957    8227 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1008 10:53:48.873974    8227 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1008 10:53:48.873986    8227 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1008 10:53:48.873789    8227 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:53:48.874175    8227 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:53:48.883563    8227 start.go:159] libmachine.API.Create for "test-preload-534000" (driver="qemu2")
	I1008 10:53:48.883597    8227 client.go:168] LocalClient.Create starting
	I1008 10:53:48.883677    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:53:48.883712    8227 main.go:141] libmachine: Decoding PEM data...
	I1008 10:53:48.883723    8227 main.go:141] libmachine: Parsing certificate...
	I1008 10:53:48.883764    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:53:48.883793    8227 main.go:141] libmachine: Decoding PEM data...
	I1008 10:53:48.883800    8227 main.go:141] libmachine: Parsing certificate...
	I1008 10:53:48.884153    8227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:53:49.033509    8227 main.go:141] libmachine: Creating SSH key...
	I1008 10:53:49.247645    8227 main.go:141] libmachine: Creating Disk image...
	I1008 10:53:49.247671    8227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:53:49.247886    8227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2
	I1008 10:53:49.258494    8227 main.go:141] libmachine: STDOUT: 
	I1008 10:53:49.258516    8227 main.go:141] libmachine: STDERR: 
	I1008 10:53:49.258571    8227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2 +20000M
	I1008 10:53:49.268622    8227 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:53:49.268661    8227 main.go:141] libmachine: STDERR: 
	I1008 10:53:49.268706    8227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2
	I1008 10:53:49.268713    8227 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:53:49.268731    8227 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:53:49.268776    8227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a6:08:82:14:8b:01 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2
	I1008 10:53:49.270768    8227 main.go:141] libmachine: STDOUT: 
	I1008 10:53:49.270787    8227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:53:49.270814    8227 client.go:171] duration metric: took 387.204417ms to LocalClient.Create
	I1008 10:53:49.293571    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I1008 10:53:49.331364    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1008 10:53:49.377826    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	I1008 10:53:49.449060    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1008 10:53:49.510936    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I1008 10:53:49.552131    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	W1008 10:53:49.636722    8227 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1008 10:53:49.636753    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1008 10:53:49.823417    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I1008 10:53:49.823463    8227 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 964.540167ms
	I1008 10:53:49.823504    8227 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	W1008 10:53:50.314299    8227 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1008 10:53:50.314407    8227 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 10:53:51.217793    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I1008 10:53:51.217865    8227 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 2.359022167s
	I1008 10:53:51.217897    8227 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I1008 10:53:51.241495    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 10:53:51.241540    8227 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.382867s
	I1008 10:53:51.241565    8227 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 10:53:51.271023    8227 start.go:128] duration metric: took 2.411583959s to createHost
	I1008 10:53:51.271073    8227 start.go:83] releasing machines lock for "test-preload-534000", held for 2.411723958s
	W1008 10:53:51.271129    8227 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:51.287236    8227 out.go:177] * Deleting "test-preload-534000" in qemu2 ...
	W1008 10:53:51.308082    8227 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:51.308118    8227 start.go:729] Will try again in 5 seconds ...
	I1008 10:53:51.762008    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I1008 10:53:51.762059    8227 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 2.903107708s
	I1008 10:53:51.762085    8227 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I1008 10:53:52.842997    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I1008 10:53:52.843061    8227 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 3.984230792s
	I1008 10:53:52.843087    8227 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I1008 10:53:54.587420    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I1008 10:53:54.587470    8227 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 5.728784958s
	I1008 10:53:54.587496    8227 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I1008 10:53:55.666780    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I1008 10:53:55.666831    8227 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 6.808164708s
	I1008 10:53:55.666882    8227 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I1008 10:53:56.308420    8227 start.go:360] acquireMachinesLock for test-preload-534000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:53:56.308934    8227 start.go:364] duration metric: took 421.208µs to acquireMachinesLock for "test-preload-534000"
	I1008 10:53:56.309058    8227 start.go:93] Provisioning new machine with config: &{Name:test-preload-534000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-534000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:53:56.309271    8227 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:53:56.315964    8227 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:53:56.364297    8227 start.go:159] libmachine.API.Create for "test-preload-534000" (driver="qemu2")
	I1008 10:53:56.364358    8227 client.go:168] LocalClient.Create starting
	I1008 10:53:56.364558    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:53:56.364655    8227 main.go:141] libmachine: Decoding PEM data...
	I1008 10:53:56.364676    8227 main.go:141] libmachine: Parsing certificate...
	I1008 10:53:56.364759    8227 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:53:56.364822    8227 main.go:141] libmachine: Decoding PEM data...
	I1008 10:53:56.364837    8227 main.go:141] libmachine: Parsing certificate...
	I1008 10:53:56.365393    8227 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:53:56.522458    8227 main.go:141] libmachine: Creating SSH key...
	I1008 10:53:56.577086    8227 main.go:141] libmachine: Creating Disk image...
	I1008 10:53:56.577092    8227 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:53:56.577281    8227 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2
	I1008 10:53:56.587152    8227 main.go:141] libmachine: STDOUT: 
	I1008 10:53:56.587168    8227 main.go:141] libmachine: STDERR: 
	I1008 10:53:56.587243    8227 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2 +20000M
	I1008 10:53:56.595940    8227 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:53:56.595973    8227 main.go:141] libmachine: STDERR: 
	I1008 10:53:56.595989    8227 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2
	I1008 10:53:56.595993    8227 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:53:56.596008    8227 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:53:56.596042    8227 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:24:60:dc:71:17 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/test-preload-534000/disk.qcow2
	I1008 10:53:56.597946    8227 main.go:141] libmachine: STDOUT: 
	I1008 10:53:56.597961    8227 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:53:56.597974    8227 client.go:171] duration metric: took 233.60025ms to LocalClient.Create
	I1008 10:53:58.257190    8227 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I1008 10:53:58.257260    8227 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.398333875s
	I1008 10:53:58.257284    8227 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I1008 10:53:58.257332    8227 cache.go:87] Successfully saved all images to host disk.
	I1008 10:53:58.600171    8227 start.go:128] duration metric: took 2.290871s to createHost
	I1008 10:53:58.600238    8227 start.go:83] releasing machines lock for "test-preload-534000", held for 2.291283s
	W1008 10:53:58.600593    8227 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-534000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:53:58.613154    8227 out.go:201] 
	W1008 10:53:58.618246    8227 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:53:58.618281    8227 out.go:270] * 
	* 
	W1008 10:53:58.620743    8227 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:53:58.629132    8227 out.go:201] 

** /stderr **
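
The stderr above also shows the image-cache path completing even though the VM never starts: each registry.k8s.io image is pulled, its architecture is compared against the host (hence the "arch mismatch: want arm64 got amd64. fixing" warnings for coredns and storage-provisioner), and the result is saved as a tarball under .minikube/cache/images/arm64. A rough sketch of that architecture check, assuming go-containerregistry (the library minikube's image code uses); the image reference and expected architecture are taken from the log, and the actual fix-up step (re-resolving the arm64 manifest) is omitted:

package main

import (
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Image reference copied from the log above; "arm64" is the
	// architecture the cache wants for this Apple Silicon host.
	const ref = "registry.k8s.io/coredns/coredns:v1.8.6"
	img, err := crane.Pull(ref) // a default pull may resolve the amd64 manifest
	if err != nil {
		log.Fatal(err)
	}
	cfg, err := img.ConfigFile()
	if err != nil {
		log.Fatal(err)
	}
	if cfg.Architecture != "arm64" {
		fmt.Printf("arch mismatch: want arm64 got %s. fixing\n", cfg.Architecture)
	}
}
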
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-534000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:629: *** TestPreload FAILED at 2024-10-08 10:53:58.646904 -0700 PDT m=+709.213141959
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-534000 -n test-preload-534000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-534000 -n test-preload-534000: exit status 7 (71.198041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-534000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-534000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-534000
--- FAIL: TestPreload (10.07s)
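
Every qemu2 start in this run dies at the same step: libmachine hands the QEMU command line to socket_vmnet_client, which cannot reach the daemon socket at /var/run/socket_vmnet ("Connection refused"), so no VM ever boots and each test falls through to the GUEST_PROVISION error. A minimal standalone probe, a sketch for illustration only (not part of the test suite; the socket path is taken from the config dumps above), reproduces the check:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket that socket_vmnet_client connects to before
	// handing a file descriptor to qemu-system-aarch64 (-netdev socket,fd=3).
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// With the daemon down this prints the same "connection refused"
		// seen in the STDERR captured above.
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If the dial is refused while /var/run/socket_vmnet exists on disk, the socket_vmnet daemon is installed but not running on the host, which is consistent with every qemu2 failure that follows.
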

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-319000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-319000 --memory=2048 --driver=qemu2 : exit status 80 (9.879970709s)

-- stdout --
	* [scheduled-stop-319000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-319000" primary control-plane node in "scheduled-stop-319000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-319000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-319000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-319000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-319000" primary control-plane node in "scheduled-stop-319000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-319000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-319000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestScheduledStopUnix FAILED at 2024-10-08 10:54:08.680974 -0700 PDT m=+719.247237084
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-319000 -n scheduled-stop-319000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-319000 -n scheduled-stop-319000: exit status 7 (74.697625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-319000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-319000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-319000
--- FAIL: TestScheduledStopUnix (10.04s)

TestSkaffold (17.52s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2930532126 version
skaffold_test.go:59: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/skaffold.exe2930532126 version: (1.061558875s)
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-718000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-718000 --memory=2600 --driver=qemu2 : exit status 80 (10.070972917s)

-- stdout --
	* [skaffold-718000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-718000" primary control-plane node in "skaffold-718000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-718000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-718000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-718000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-718000" primary control-plane node in "skaffold-718000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-718000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-718000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:629: *** TestSkaffold FAILED at 2024-10-08 10:54:26.2096 -0700 PDT m=+736.775907251
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-718000 -n skaffold-718000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-718000 -n skaffold-718000: exit status 7 (68.09975ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-718000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-718000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-718000
--- FAIL: TestSkaffold (17.52s)

TestRunningBinaryUpgrade (642.1s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2204185011 start -p running-upgrade-967000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.2204185011 start -p running-upgrade-967000 --memory=2200 --vm-driver=qemu2 : (1m26.227955958s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-967000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-967000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m37.594284708s)

-- stdout --
	* [running-upgrade-967000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-967000" primary control-plane node in "running-upgrade-967000" cluster
	* Updating the running qemu2 "running-upgrade-967000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1008 10:56:18.239533    8534 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:56:18.239688    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:56:18.239691    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:56:18.239693    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:56:18.239825    8534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:56:18.240858    8534 out.go:352] Setting JSON to false
	I1008 10:56:18.260853    8534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5148,"bootTime":1728405030,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:56:18.260918    8534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:56:18.264958    8534 out.go:177] * [running-upgrade-967000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:56:18.272968    8534 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:56:18.273008    8534 notify.go:220] Checking for updates...
	I1008 10:56:18.280984    8534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:18.284006    8534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:56:18.286949    8534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:56:18.290030    8534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:56:18.292977    8534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:56:18.296217    8534 config.go:182] Loaded profile config "running-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:18.299985    8534 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 10:56:18.302867    8534 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:56:18.306951    8534 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:56:18.312921    8534 start.go:297] selected driver: qemu2
	I1008 10:56:18.312927    8534 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:18.312990    8534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:56:18.315381    8534 cni.go:84] Creating CNI manager for ""
	I1008 10:56:18.315409    8534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:56:18.315439    8534 start.go:340] cluster config:
	{Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:18.315491    8534 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:56:18.323958    8534 out.go:177] * Starting "running-upgrade-967000" primary control-plane node in "running-upgrade-967000" cluster
	I1008 10:56:18.327895    8534 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:18.327908    8534 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1008 10:56:18.327918    8534 cache.go:56] Caching tarball of preloaded images
	I1008 10:56:18.327966    8534 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:56:18.327971    8534 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1008 10:56:18.328017    8534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/config.json ...
	I1008 10:56:18.328408    8534 start.go:360] acquireMachinesLock for running-upgrade-967000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:56:28.286891    8534 start.go:364] duration metric: took 9.958487291s to acquireMachinesLock for "running-upgrade-967000"
	I1008 10:56:28.286923    8534 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:56:28.286927    8534 fix.go:54] fixHost starting: 
	I1008 10:56:28.287652    8534 fix.go:112] recreateIfNeeded on running-upgrade-967000: state=Running err=<nil>
	W1008 10:56:28.287659    8534 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:56:28.291276    8534 out.go:177] * Updating the running qemu2 "running-upgrade-967000" VM ...
	I1008 10:56:28.297346    8534 machine.go:93] provisionDockerMachine start ...
	I1008 10:56:28.297449    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.297582    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.297586    8534 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 10:56:28.361710    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-967000
	
	I1008 10:56:28.361728    8534 buildroot.go:166] provisioning hostname "running-upgrade-967000"
	I1008 10:56:28.361788    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.361911    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.361917    8534 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-967000 && echo "running-upgrade-967000" | sudo tee /etc/hostname
	I1008 10:56:28.429475    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-967000
	
	I1008 10:56:28.429544    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.429660    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.429670    8534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-967000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-967000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-967000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 10:56:28.494208    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 10:56:28.494224    8534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19774-6384/.minikube CaCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19774-6384/.minikube}
	I1008 10:56:28.494232    8534 buildroot.go:174] setting up certificates
	I1008 10:56:28.494237    8534 provision.go:84] configureAuth start
	I1008 10:56:28.494245    8534 provision.go:143] copyHostCerts
	I1008 10:56:28.494328    8534 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem, removing ...
	I1008 10:56:28.494335    8534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem
	I1008 10:56:28.494438    8534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem (1078 bytes)
	I1008 10:56:28.494612    8534 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem, removing ...
	I1008 10:56:28.494619    8534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem
	I1008 10:56:28.494661    8534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem (1123 bytes)
	I1008 10:56:28.494769    8534 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem, removing ...
	I1008 10:56:28.494773    8534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem
	I1008 10:56:28.494810    8534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem (1679 bytes)
	I1008 10:56:28.494908    8534 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-967000 san=[127.0.0.1 localhost minikube running-upgrade-967000]
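
Note: the server certificate above is generated in-process by minikube (provision.go), not via openssl. An equivalent openssl sketch, assuming the CA files named in the log and an illustrative validity period (minikube's own default differs; this profile uses CertExpiration:26280h0m0s):

	# CSR for the machine, then sign with the minikube CA, embedding the SANs from the log line above
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
		-subj "/O=jenkins.running-upgrade-967000" -out server.csr
	openssl x509 -req -in server.csr -CA certs/ca.pem -CAkey certs/ca-key.pem -CAcreateserial \
		-days 365 -out server.pem \
		-extfile <(printf 'subjectAltName=IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:running-upgrade-967000')
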
	I1008 10:56:28.605164    8534 provision.go:177] copyRemoteCerts
	I1008 10:56:28.605214    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 10:56:28.605222    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 10:56:28.639233    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 10:56:28.649873    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 10:56:28.657210    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 10:56:28.664490    8534 provision.go:87] duration metric: took 170.247083ms to configureAuth
	I1008 10:56:28.664503    8534 buildroot.go:189] setting minikube options for container-runtime
	I1008 10:56:28.664615    8534 config.go:182] Loaded profile config "running-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:28.664666    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.664756    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.664762    8534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1008 10:56:28.728476    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1008 10:56:28.728489    8534 buildroot.go:70] root file system type: tmpfs
	I1008 10:56:28.728539    8534 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1008 10:56:28.728608    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.728728    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.728762    8534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1008 10:56:28.793589    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1008 10:56:28.793657    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.793773    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.793781    8534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1008 10:56:28.860804    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 10:56:28.860820    8534 machine.go:96] duration metric: took 563.468791ms to provisionDockerMachine
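
Note: per the ExecStart line above, dockerd now listens with TLS on tcp://0.0.0.0:2376 inside the guest, authenticated against the certs copied to /etc/docker. A hedged way to verify from the host, assuming the guest's 2376 is reachable (this log only shows the SSH forward on 51233, not a 2376 forward, so the address below is a placeholder):

	docker --tlsverify \
		--tlscacert /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem \
		--tlscert   /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem \
		--tlskey    /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem \
		-H tcp://<forwarded-address>:2376 version
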
	I1008 10:56:28.860828    8534 start.go:293] postStartSetup for "running-upgrade-967000" (driver="qemu2")
	I1008 10:56:28.860835    8534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 10:56:28.860914    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 10:56:28.860923    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 10:56:28.896921    8534 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 10:56:28.898407    8534 info.go:137] Remote host: Buildroot 2021.02.12
	I1008 10:56:28.898416    8534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/addons for local assets ...
	I1008 10:56:28.898481    8534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/files for local assets ...
	I1008 10:56:28.898583    8534 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem -> 69072.pem in /etc/ssl/certs
	I1008 10:56:28.898685    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 10:56:28.901233    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:28.908484    8534 start.go:296] duration metric: took 47.650458ms for postStartSetup
	I1008 10:56:28.908498    8534 fix.go:56] duration metric: took 621.572584ms for fixHost
	I1008 10:56:28.908544    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.908654    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.908661    8534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 10:56:28.974540    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410188.918847935
	
	I1008 10:56:28.974555    8534 fix.go:216] guest clock: 1728410188.918847935
	I1008 10:56:28.974560    8534 fix.go:229] Guest: 2024-10-08 10:56:28.918847935 -0700 PDT Remote: 2024-10-08 10:56:28.908499 -0700 PDT m=+10.693740085 (delta=10.348935ms)
	I1008 10:56:28.974572    8534 fix.go:200] guest clock delta is within tolerance: 10.348935ms
	I1008 10:56:28.974575    8534 start.go:83] releasing machines lock for "running-upgrade-967000", held for 687.665625ms
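
Note: the fix step above runs `date +%s.%N` in the guest and compares it against the host clock, accepting the ~10ms delta. A standalone sketch using the SSH parameters from this log (port 51233, user docker, the machine's id_rsa); it assumes a host `date` that supports %N (GNU date), whereas minikube does the comparison in Go:

	KEY=/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa
	guest=$(ssh -p 51233 -i "$KEY" docker@localhost 'date +%s.%N')
	host=$(date +%s.%N)
	echo "guest-host delta: $(echo "$guest - $host" | bc)s"
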
	I1008 10:56:28.974669    8534 ssh_runner.go:195] Run: cat /version.json
	I1008 10:56:28.974669    8534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 10:56:28.974680    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 10:56:28.974693    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	W1008 10:56:28.975272    8534 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51233: connect: connection refused
	I1008 10:56:28.975294    8534 retry.go:31] will retry after 265.663688ms: dial tcp [::1]:51233: connect: connection refused
	W1008 10:56:29.274475    8534 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1008 10:56:29.274558    8534 ssh_runner.go:195] Run: systemctl --version
	I1008 10:56:29.276585    8534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 10:56:29.278198    8534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 10:56:29.278229    8534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1008 10:56:29.281407    8534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1008 10:56:29.285742    8534 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
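
Note: the two find/sed invocations above rewrite any pre-existing bridge/podman CNI configs onto the 10.244.0.0/16 pod CIDR (and 10.244.0.1 gateway). The core substitution, demonstrated on a sample line:

	printf '    "subnet": "10.88.0.0/16",\n' \
		| sed -E 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|'
	# ->    "subnet": "10.244.0.0/16",
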
	I1008 10:56:29.285749    8534 start.go:495] detecting cgroup driver to use...
	I1008 10:56:29.285824    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:29.291104    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1008 10:56:29.294412    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 10:56:29.297819    8534 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 10:56:29.297853    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 10:56:29.300999    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:29.303803    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 10:56:29.308018    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:29.311036    8534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 10:56:29.314056    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 10:56:29.317198    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 10:56:29.320176    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 10:56:29.323044    8534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 10:56:29.326328    8534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 10:56:29.329412    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:29.424476    8534 ssh_runner.go:195] Run: sudo systemctl restart containerd
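
Note: the sed batch above amounts to: pin sandbox_image to registry.k8s.io/pause:3.7, set restrict_oom_score_adj = false, select the cgroupfs driver (SystemdCgroup = false, dropping the legacy systemd_cgroup key), migrate runtime v1 entries to io.containerd.runc.v2, and point conf_dir at /etc/cni/net.d. A quick post-restart verification sketch:

	grep -nE 'SystemdCgroup|sandbox_image|conf_dir|restrict_oom_score_adj' /etc/containerd/config.toml
	systemctl is-active containerd   # expect: active
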
	I1008 10:56:29.430607    8534 start.go:495] detecting cgroup driver to use...
	I1008 10:56:29.430688    8534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1008 10:56:29.436482    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:29.442888    8534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 10:56:29.449075    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:29.454285    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 10:56:29.462468    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:29.475289    8534 ssh_runner.go:195] Run: which cri-dockerd
	I1008 10:56:29.476566    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1008 10:56:29.479177    8534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1008 10:56:29.484296    8534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1008 10:56:29.583696    8534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1008 10:56:29.672328    8534 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1008 10:56:29.672389    8534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1008 10:56:29.677748    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:29.764069    8534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:32.233109    8534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.469028708s)
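
Note: the 130-byte /etc/docker/daemon.json written above is not shown in the log; a representative payload selecting the cgroupfs driver (an assumption about its exact contents) would be:

	sudo tee /etc/docker/daemon.json >/dev/null <<-'EOF'
		{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs
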
	I1008 10:56:32.233188    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1008 10:56:32.238484    8534 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1008 10:56:32.246613    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:32.253036    8534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1008 10:56:32.341137    8534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1008 10:56:32.429761    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:32.516493    8534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1008 10:56:32.524627    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:32.530209    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:32.603107    8534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1008 10:56:32.650425    8534 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1008 10:56:32.650525    8534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
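
Note: the ordering above matters for systemd socket activation: cri-docker.socket is unmasked, enabled, and restarted before cri-docker.service, so /var/run/cri-dockerd.sock exists for the kubelet even if the service starts lazily. Condensed:

	sudo systemctl unmask cri-docker.socket
	sudo systemctl enable cri-docker.socket
	sudo systemctl daemon-reload
	sudo systemctl restart cri-docker.socket    # socket first
	sudo systemctl restart cri-docker.service   # then the service
	stat /var/run/cri-dockerd.sock              # minikube waits up to 60s for this path
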
	I1008 10:56:32.653486    8534 start.go:563] Will wait 60s for crictl version
	I1008 10:56:32.653558    8534 ssh_runner.go:195] Run: which crictl
	I1008 10:56:32.655164    8534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 10:56:32.666770    8534 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1008 10:56:32.666850    8534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:32.679899    8534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:32.701887    8534 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1008 10:56:32.701975    8534 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1008 10:56:32.703458    8534 kubeadm.go:883] updating cluster {Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1008 10:56:32.703501    8534 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:32.703552    8534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:32.714438    8534 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:32.714447    8534 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:32.714480    8534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:32.717845    8534 ssh_runner.go:195] Run: which lz4
	I1008 10:56:32.719373    8534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 10:56:32.720857    8534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 10:56:32.720876    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1008 10:56:33.699897    8534 docker.go:649] duration metric: took 980.562917ms to copy over tarball
	I1008 10:56:33.699975    8534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 10:56:35.226344    8534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.5263585s)
	I1008 10:56:35.226358    8534 ssh_runner.go:146] rm: /preloaded.tar.lz4
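
Note: preload recovery, as above, is three steps: copy the lz4 tarball into the guest, unpack it over /var while preserving security xattrs (so file capabilities survive), then delete the tarball. Guest-side equivalent:

	sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	docker images --format '{{.Repository}}:{{.Tag}}'   # confirm the k8s images landed
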
	I1008 10:56:35.243689    8534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:35.249139    8534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1008 10:56:35.256162    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:35.348130    8534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:36.520078    8534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.171933958s)
	I1008 10:56:36.520172    8534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:36.531369    8534 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:36.531378    8534 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:36.531383    8534 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 10:56:36.535330    8534 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.537083    8534 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:36.539286    8534 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.539437    8534 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:36.541549    8534 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:36.541956    8534 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:36.543847    8534 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:36.543933    8534 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:36.544084    8534 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:36.545405    8534 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:36.546330    8534 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1008 10:56:36.546556    8534 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:36.548002    8534 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:36.548034    8534 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:36.549075    8534 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1008 10:56:36.550888    8534 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:37.023336    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:37.034409    8534 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1008 10:56:37.034436    8534 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:37.034484    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:37.047582    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1008 10:56:37.059178    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:37.069379    8534 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1008 10:56:37.069408    8534 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:37.069466    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:37.087848    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1008 10:56:37.104557    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:37.116370    8534 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1008 10:56:37.116390    8534 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:37.116447    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:37.132206    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1008 10:56:37.140792    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:37.153508    8534 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1008 10:56:37.153534    8534 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:37.153581    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:37.164642    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1008 10:56:37.260591    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:37.275643    8534 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1008 10:56:37.275665    8534 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:37.275732    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:37.281008    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1008 10:56:37.288187    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1008 10:56:37.288313    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:37.292850    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1008 10:56:37.292875    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1008 10:56:37.293032    8534 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1008 10:56:37.293051    8534 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1008 10:56:37.293094    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1008 10:56:37.365839    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1008 10:56:37.366017    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W1008 10:56:37.375485    8534 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:37.375628    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	W1008 10:56:37.381430    8534 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:37.381551    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:37.394467    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1008 10:56:37.394499    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1008 10:56:37.453777    8534 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1008 10:56:37.453803    8534 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:37.453866    8534 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1008 10:56:37.453876    8534 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:37.453878    8534 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:37.453910    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:37.461744    8534 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1008 10:56:37.461775    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1008 10:56:37.509775    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 10:56:37.509927    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:37.553698    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1008 10:56:37.553835    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:37.630692    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1008 10:56:37.630745    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1008 10:56:37.630748    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1008 10:56:37.630761    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1008 10:56:37.630789    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1008 10:56:37.746169    8534 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:37.746212    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1008 10:56:38.066022    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 10:56:38.066050    8534 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:38.066059    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1008 10:56:38.106861    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1008 10:56:38.106886    8534 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:38.106893    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1008 10:56:38.284062    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1008 10:56:38.284102    8534 cache_images.go:92] duration metric: took 1.752717292s to LoadCachedImages
	W1008 10:56:38.284157    8534 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
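
Note: each recovered image is shipped to /var/lib/minikube/images/ and piped into the daemon the same way; a condensed sketch of the loop the log walks through (pause, storage-provisioner, coredns, etcd; the kube-* images were missing from the host cache, hence the warning above):

	for img in pause_3.7 storage-provisioner_v5 coredns_v1.8.6 etcd_3.5.3-0; do
		sudo cat "/var/lib/minikube/images/$img" | docker load
	done
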
	I1008 10:56:38.284166    8534 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1008 10:56:38.284227    8534 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-967000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 10:56:38.284310    8534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1008 10:56:38.306702    8534 cni.go:84] Creating CNI manager for ""
	I1008 10:56:38.306716    8534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:56:38.306722    8534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 10:56:38.306733    8534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-967000 NodeName:running-upgrade-967000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 10:56:38.306820    8534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-967000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 10:56:38.306899    8534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1008 10:56:38.315080    8534 binaries.go:44] Found k8s binaries, skipping transfer
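
Note: the kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new as four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file before applying it; minikube itself does not run this:

	sudo /var/lib/minikube/binaries/v1.24.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
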
	I1008 10:56:38.315158    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 10:56:38.318740    8534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1008 10:56:38.325400    8534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 10:56:38.346138    8534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1008 10:56:38.353280    8534 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1008 10:56:38.355267    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:38.443873    8534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 10:56:38.450810    8534 certs.go:68] Setting up /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000 for IP: 10.0.2.15
	I1008 10:56:38.450829    8534 certs.go:194] generating shared ca certs ...
	I1008 10:56:38.450842    8534 certs.go:226] acquiring lock for ca certs: {Name:mkb70c9691d78e2ecd0076f3f0607577e8eefb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:38.451028    8534 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key
	I1008 10:56:38.451068    8534 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key
	I1008 10:56:38.451074    8534 certs.go:256] generating profile certs ...
	I1008 10:56:38.451136    8534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.key
	I1008 10:56:38.451156    8534 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328
	I1008 10:56:38.451170    8534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1008 10:56:38.506191    8534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328 ...
	I1008 10:56:38.506207    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328: {Name:mkcc83885ed6de6bc78b832de69b92f50e4770e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:38.506537    8534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328 ...
	I1008 10:56:38.506543    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328: {Name:mk70ebdbfc3de979abf7675c67172a258f406809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:38.506712    8534 certs.go:381] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt
	I1008 10:56:38.506823    8534 certs.go:385] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key
	I1008 10:56:38.506958    8534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/proxy-client.key
	I1008 10:56:38.507093    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem (1338 bytes)
	W1008 10:56:38.507118    8534 certs.go:480] ignoring /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907_empty.pem, impossibly tiny 0 bytes
	I1008 10:56:38.507126    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 10:56:38.507148    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem (1078 bytes)
	I1008 10:56:38.507165    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem (1123 bytes)
	I1008 10:56:38.507184    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem (1679 bytes)
	I1008 10:56:38.507222    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:38.507570    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 10:56:38.519761    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 10:56:38.531297    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 10:56:38.544070    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 10:56:38.557182    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 10:56:38.566164    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 10:56:38.577380    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 10:56:38.589742    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 10:56:38.603984    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem --> /usr/share/ca-certificates/6907.pem (1338 bytes)
	I1008 10:56:38.615702    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /usr/share/ca-certificates/69072.pem (1708 bytes)
	I1008 10:56:38.628230    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 10:56:38.635451    8534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 10:56:38.647081    8534 ssh_runner.go:195] Run: openssl version
	I1008 10:56:38.650325    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907.pem && ln -fs /usr/share/ca-certificates/6907.pem /etc/ssl/certs/6907.pem"
	I1008 10:56:38.658599    8534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907.pem
	I1008 10:56:38.662341    8534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:43 /usr/share/ca-certificates/6907.pem
	I1008 10:56:38.662380    8534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907.pem
	I1008 10:56:38.666211    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6907.pem /etc/ssl/certs/51391683.0"
	I1008 10:56:38.672168    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69072.pem && ln -fs /usr/share/ca-certificates/69072.pem /etc/ssl/certs/69072.pem"
	I1008 10:56:38.675684    8534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69072.pem
	I1008 10:56:38.679334    8534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:43 /usr/share/ca-certificates/69072.pem
	I1008 10:56:38.679383    8534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69072.pem
	I1008 10:56:38.681362    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69072.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 10:56:38.684687    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 10:56:38.688246    8534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:38.693165    8534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:55 /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:38.693221    8534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:38.695868    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
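
Note: the hex link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names, which is how the system trust store is indexed. For any one certificate:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the log
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
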
	I1008 10:56:38.704692    8534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 10:56:38.710715    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 10:56:38.714038    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 10:56:38.716247    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 10:56:38.725659    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 10:56:38.730479    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 10:56:38.733306    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
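
Note: -checkend 86400 makes openssl exit non-zero if the certificate expires within 24 hours; that exit code is what gates regeneration here. In isolation:

	if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
		echo "valid for at least another 24h"
	else
		echo "expiring or expired; would be regenerated"
	fi
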
	I1008 10:56:38.736022    8534 kubeadm.go:392] StartCluster: {Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:38.736119    8534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:38.754233    8534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 10:56:38.763925    8534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 10:56:38.763934    8534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 10:56:38.764002    8534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 10:56:38.768216    8534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:38.768546    8534 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-967000" does not appear in /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:38.768650    8534 kubeconfig.go:62] /Users/jenkins/minikube-integration/19774-6384/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-967000" cluster setting kubeconfig missing "running-upgrade-967000" context setting]
	I1008 10:56:38.769252    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
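Note: the verify/repair step above checks that the profile appears as both a cluster entry and a context entry in the kubeconfig before rewriting the file under a lock. A rough equivalent of just the check using k8s.io/client-go (path and profile name are from the log; the helper itself is hypothetical):

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    )

    // needsRepair reports which kubeconfig entries are missing for a profile.
    func needsRepair(kubeconfigPath, profile string) ([]string, error) {
    	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
    	if err != nil {
    		return nil, err
    	}
    	var missing []string
    	if _, ok := cfg.Clusters[profile]; !ok {
    		missing = append(missing, "cluster")
    	}
    	if _, ok := cfg.Contexts[profile]; !ok {
    		missing = append(missing, "context")
    	}
    	return missing, nil
    }

    func main() {
    	missing, err := needsRepair("/Users/jenkins/minikube-integration/19774-6384/kubeconfig", "running-upgrade-967000")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("missing entries:", missing) // e.g. [cluster context], as in the log
    }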
	I1008 10:56:38.769670    8534 kapi.go:59] client config for running-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060880f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 10:56:38.770026    8534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 10:56:38.773122    8534 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-967000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
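Note: the drift check is simply `diff -u` over the old and freshly rendered kubeadm.yaml; exit status 1 (with output) means the rendered config changed, so the cluster is reconfigured from the .new file. The two hunks that differ here are the CRI socket, which newer kubeadm expects as a full unix:// URL rather than a bare path, and the kubelet cgroup-driver/runtime settings. A hedged Go sketch of driving that POSIX exit-code convention (paths from the log; function name illustrative):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // configDrift runs `diff -u old new` and interprets the POSIX exit codes:
    // 0 = identical, 1 = files differ (the drift case), >1 = diff itself failed.
    func configDrift(oldPath, newPath string) (string, bool, error) {
    	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
    	if err == nil {
    		return "", false, nil // no drift
    	}
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
    		return string(out), true, nil // drift detected; out holds the unified diff
    	}
    	return "", false, err
    }

    func main() {
    	diff, drifted, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	if err != nil {
    		panic(err)
    	}
    	if drifted {
    		fmt.Print(diff)
    	}
    }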
	I1008 10:56:38.773131    8534 kubeadm.go:1160] stopping kube-system containers ...
	I1008 10:56:38.773183    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:38.796676    8534 docker.go:483] Stopping containers: [dd87d47a322c 2d3a13ff6b91 7ae29440d84a a24cf16eff8a 1f931536e1ad 95eb0774c47b 3a415fbebc63 af880d816398 38f5e47e0e83 86d190fbb173 69180d4d04b0 cd190ed050a6 8484d7c0f593 cdc5f2e0c4f3 b3d318f8155e c99057fc3b4f cd5622ea9ada c84a40b214e0 c0ecc3779b41]
	I1008 10:56:38.796836    8534 ssh_runner.go:195] Run: docker stop dd87d47a322c 2d3a13ff6b91 7ae29440d84a a24cf16eff8a 1f931536e1ad 95eb0774c47b 3a415fbebc63 af880d816398 38f5e47e0e83 86d190fbb173 69180d4d04b0 cd190ed050a6 8484d7c0f593 cdc5f2e0c4f3 b3d318f8155e c99057fc3b4f cd5622ea9ada c84a40b214e0 c0ecc3779b41
	I1008 10:56:40.029245    8534 ssh_runner.go:235] Completed: docker stop dd87d47a322c 2d3a13ff6b91 7ae29440d84a a24cf16eff8a 1f931536e1ad 95eb0774c47b 3a415fbebc63 af880d816398 38f5e47e0e83 86d190fbb173 69180d4d04b0 cd190ed050a6 8484d7c0f593 cdc5f2e0c4f3 b3d318f8155e c99057fc3b4f cd5622ea9ada c84a40b214e0 c0ecc3779b41: (1.232394083s)
	I1008 10:56:40.029353    8534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 10:56:40.111599    8534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 10:56:40.115326    8534 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  8 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct  8 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  8 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct  8 17:56 /etc/kubernetes/scheduler.conf
	
	I1008 10:56:40.115370    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf
	I1008 10:56:40.118686    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.118718    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 10:56:40.121763    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf
	I1008 10:56:40.124653    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.124686    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 10:56:40.127614    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf
	I1008 10:56:40.130402    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.130434    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 10:56:40.133290    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf
	I1008 10:56:40.136203    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.136237    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
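Note: the four grep/rm pairs above implement one idea: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint (here https://control-plane.minikube.internal:51326), delete it so the `kubeadm init phase kubeconfig` step below regenerates it. A stdlib-Go sketch of that loop, assuming it runs with enough privilege to read and remove the files (paths and endpoint from the log):

    package main

    import (
    	"bytes"
    	"os"
    )

    // removeStaleConfigs deletes each config file that does not contain endpoint,
    // mirroring the `sudo grep <endpoint> <file>` + `sudo rm -f <file>` pattern.
    func removeStaleConfigs(endpoint string, paths []string) error {
    	for _, p := range paths {
    		data, err := os.ReadFile(p)
    		if err != nil {
    			if os.IsNotExist(err) {
    				continue // nothing to clean up
    			}
    			return err
    		}
    		if !bytes.Contains(data, []byte(endpoint)) {
    			if err := os.Remove(p); err != nil {
    				return err
    			}
    		}
    	}
    	return nil
    }

    func main() {
    	_ = removeStaleConfigs("https://control-plane.minikube.internal:51326", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }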
	I1008 10:56:40.139234    8534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 10:56:40.144100    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.167223    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.602116    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.858321    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.892835    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
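Note: rather than a full `kubeadm init`, the restart path replays individual init phases in dependency order: certs, then kubeconfig files, then kubelet-start, then the control-plane static pods, then local etcd. A sketch of that sequencing (binary path, config path, and phase order taken from the log; running it for real requires root inside the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	const kubeadm = "/var/lib/minikube/binaries/v1.24.1/kubeadm"
    	const config = "/var/tmp/minikube/kubeadm.yaml"
    	// Phases in the same dependency order the log shows.
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, phase := range phases {
    		args := append([]string{"init", "phase"}, phase...)
    		args = append(args, "--config", config)
    		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
    			return
    		}
    	}
    }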
	I1008 10:56:40.916618    8534 api_server.go:52] waiting for apiserver process to appear ...
	I1008 10:56:40.916702    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:41.418814    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:41.918769    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:41.923085    8534 api_server.go:72] duration metric: took 1.006479791s to wait for apiserver process to appear ...
	I1008 10:56:41.923094    8534 api_server.go:88] waiting for apiserver healthz status ...
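Note: each probe below is an HTTPS GET against /healthz with roughly a five-second client timeout; on 200 OK the wait would end. Every attempt in this run dies with "Client.Timeout exceeded", meaning the apiserver never came up, so minikube periodically interleaves the log-gathering blocks seen further down to diagnose. A minimal sketch of such a poller, assuming server-certificate verification is skipped for brevity (a real client would trust minikube's CA instead):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the per-probe timeout in the log
    		Transport: &http.Transport{
    			// Illustrative shortcut only; pin the cluster CA in real code.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
    	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }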
	I1008 10:56:41.923105    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:46.925289    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:46.925370    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:51.926037    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:51.926101    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:56.926787    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:56.926833    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:01.927672    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:01.927755    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:06.928883    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:06.928968    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:11.930696    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:11.930752    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:16.932688    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:16.932730    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:21.935032    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:21.935074    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:26.937414    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:26.937439    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:31.939634    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:31.939657    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:36.941943    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:36.941975    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:41.944195    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:41.944357    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:41.966030    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:57:41.966134    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:41.979245    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:57:41.979332    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:41.990227    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:57:41.990305    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:42.003295    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:57:42.003384    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:42.013606    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:57:42.013676    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:42.026390    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:57:42.026479    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:42.037529    8534 logs.go:282] 0 containers: []
	W1008 10:57:42.037541    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:42.037607    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:42.048155    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:57:42.048176    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:57:42.048180    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:57:42.060794    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:57:42.060805    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:57:42.084283    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:42.084294    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:42.089047    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:42.089054    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:42.197653    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:57:42.197665    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:57:42.209308    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:57:42.209321    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:57:42.220969    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:57:42.220980    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:57:42.232586    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:57:42.232595    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:57:42.244610    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:57:42.244623    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:57:42.256311    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:57:42.256321    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:57:42.273555    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:57:42.273565    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:57:42.286153    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:57:42.286167    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:57:42.297524    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:42.297534    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:42.324169    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:42.324182    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:57:42.339831    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:42.339928    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
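Note: the recurring kubelet problem flagged here is a Node-authorizer denial. The kubelet authenticates as system:node:running-upgrade-967000, and the node authorizer only lets a node read a configmap once some pod bound to that node references it; immediately after the restart no such binding exists in the authorizer's graph yet, hence "no relationship found between node ... and this object" for kube-root-ca.crt. This usually clears once pods are (re)scheduled to the node, so it reads as a symptom of the stalled control plane rather than its cause.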
	I1008 10:57:42.364656    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:57:42.364664    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:57:42.381236    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:57:42.381251    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:57:42.394602    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:57:42.394616    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
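Note: the container-status command uses a two-level shell fallback: `which crictl || echo crictl` substitutes a crictl path if one is installed (or the bare name otherwise), and if that `crictl ps -a` invocation fails the `|| sudo docker ps -a` branch runs instead, so the same gathering step works on both CRI-based and Docker-based runtimes.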
	I1008 10:57:42.407070    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:42.407085    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:57:42.407110    8534 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 10:57:42.407114    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:42.407121    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:42.407124    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:42.407127    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:57:52.411340    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:57.413441    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:57.413631    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:57.437365    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:57:57.437483    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:57.453750    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:57:57.453841    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:57.466560    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:57:57.466640    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:57.477764    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:57:57.477849    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:57.488201    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:57:57.488281    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:57.498855    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:57:57.498954    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:57.512326    8534 logs.go:282] 0 containers: []
	W1008 10:57:57.512341    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:57.512410    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:57.523448    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:57:57.523464    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:57.523470    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:57:57.538148    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:57.538253    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:57.563891    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:57:57.563901    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:57:57.576727    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:57:57.576737    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:57:57.589164    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:57:57.589174    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:57:57.602367    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:57:57.602382    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:57:57.616017    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:57:57.616031    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:57:57.631298    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:57:57.631311    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:57:57.644533    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:57:57.644542    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:57:57.661657    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:57:57.661668    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:57:57.673057    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:57:57.673067    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:57.685286    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:57.685300    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:57.723293    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:57:57.723305    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:57:57.737365    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:57:57.737377    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:57:57.753416    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:57:57.753427    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:57:57.765087    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:57.765098    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:57.790070    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:57.790077    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:57.794850    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:57:57.794862    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:57:57.806238    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:57.806247    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:57:57.806272    8534 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 10:57:57.806276    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:57.806279    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:57.806282    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:57.806285    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:58:07.810405    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:12.812694    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:12.812893    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:12.830817    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:12.830942    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:12.844531    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:12.844605    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:12.855769    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:12.855850    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:12.866721    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:12.866809    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:12.877620    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:12.877699    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:12.888164    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:12.888245    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:12.901565    8534 logs.go:282] 0 containers: []
	W1008 10:58:12.901576    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:12.901636    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:12.912247    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:12.912273    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:12.912279    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:12.926458    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:12.926471    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:12.944711    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:12.944722    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:12.956416    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:12.956428    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:12.968935    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:12.968946    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:12.982128    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:12.982140    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:12.999568    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:12.999580    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:13.017063    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:13.017074    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:13.028445    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:13.028456    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:13.039975    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:13.039988    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:13.051061    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:13.051074    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:13.073131    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:13.073140    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:13.077273    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:13.077282    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:13.113826    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:13.113837    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:13.127677    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:13.127687    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:13.139195    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:13.139208    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:13.164616    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:13.164626    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:13.177537    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:13.177639    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:13.202821    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:13.202828    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:13.202852    8534 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 10:58:13.202856    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:13.202869    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:13.202873    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:13.202876    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:58:23.207084    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:28.209932    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:28.210259    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:28.241324    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:28.241448    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:28.257929    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:28.258016    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:28.271534    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:28.271624    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:28.286091    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:28.286177    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:28.296890    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:28.296958    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:28.307450    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:28.307536    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:28.317426    8534 logs.go:282] 0 containers: []
	W1008 10:58:28.317437    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:28.317502    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:28.333047    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:28.333064    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:28.333069    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:28.345775    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:28.345790    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:28.357129    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:28.357145    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:28.375390    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:28.375405    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:28.387484    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:28.387495    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:28.402970    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:28.403067    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:28.427761    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:28.427769    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:28.441781    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:28.441796    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:28.456379    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:28.456389    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:28.468024    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:28.468039    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:28.479894    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:28.479907    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:28.492057    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:28.492067    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:28.533915    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:28.533930    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:28.546558    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:28.546571    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:28.558187    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:28.558197    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:28.569218    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:28.569234    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:28.581903    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:28.581917    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:28.586888    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:28.586895    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:28.613774    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:28.613784    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:28.613812    8534 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 10:58:28.613816    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:28.613819    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:28.613822    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:28.613825    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:58:38.618050    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:43.620816    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:43.620999    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:43.637851    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:43.637949    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:43.650874    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:43.650950    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:43.661519    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:43.661601    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:43.672005    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:43.672082    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:43.682803    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:43.682881    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:43.693954    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:43.694031    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:43.704541    8534 logs.go:282] 0 containers: []
	W1008 10:58:43.704553    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:43.704627    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:43.717467    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:43.717484    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:43.717490    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:43.731268    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:43.731371    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:43.756450    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:43.756461    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:43.767797    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:43.767808    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:43.791651    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:43.791659    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:43.805823    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:43.805833    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:43.817622    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:43.817634    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:43.829341    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:43.829353    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:43.840778    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:43.840789    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:43.852500    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:43.852511    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:43.865586    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:43.865597    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:43.880597    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:43.880613    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:43.892631    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:43.892645    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:43.896912    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:43.896919    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:43.931240    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:43.931253    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:43.945800    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:43.945809    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:43.958133    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:43.958144    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:43.975532    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:43.975543    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:43.986661    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:43.986673    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:43.986702    8534 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W1008 10:58:43.986709    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:43.986714    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	  Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:43.986729    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:43.986732    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:58:53.989565    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:58.992030    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:58.992298    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:59.018828    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:59.018966    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:59.035753    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:59.035855    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:59.048871    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:59.048959    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:59.060032    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:59.060116    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:59.070582    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:59.070667    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:59.080907    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:59.080990    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:59.091101    8534 logs.go:282] 0 containers: []
	W1008 10:58:59.091111    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:59.091172    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:59.101435    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:59.101452    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:59.101459    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:59.121004    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:59.121017    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:59.134568    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:59.134579    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:59.150336    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:59.150435    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:59.175276    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:59.175283    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:59.212167    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:59.212177    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:59.225569    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:59.225581    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:59.236607    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:59.236620    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:59.248463    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:59.248478    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:59.260126    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:59.260138    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:59.264708    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:59.264716    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:59.276931    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:59.276941    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:59.288739    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:59.288751    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:59.300390    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:59.300399    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:59.325800    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:59.325809    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:59.341319    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:59.341330    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:59.355416    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:59.355427    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:59.376421    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:59.376437    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:59.387924    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:59.387936    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:59.387964    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:58:59.387969    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:59.387977    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:59.388052    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:59.388087    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:59:09.392242    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:14.392660    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:14.392855    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:14.407308    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:59:14.407405    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:14.418734    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:59:14.418818    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:14.429305    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:59:14.429384    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:14.440155    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:59:14.440230    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:14.450519    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:59:14.450600    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:14.461284    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:59:14.461363    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:14.472329    8534 logs.go:282] 0 containers: []
	W1008 10:59:14.472342    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:14.472415    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:14.482866    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:59:14.482882    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:59:14.482888    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:59:14.500433    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:59:14.500442    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:59:14.512378    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:14.512390    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:14.516683    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:59:14.516692    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:59:14.529721    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:59:14.529732    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:59:14.540633    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:59:14.540644    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:59:14.552456    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:14.552466    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:59:14.565741    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:14.565839    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:14.590337    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:59:14.590345    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:59:14.602340    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:59:14.602350    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:59:14.613391    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:59:14.613402    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:59:14.624758    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:59:14.624770    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:59:14.636964    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:14.636975    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:14.661908    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:59:14.661917    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:14.673905    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:14.673916    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:14.712516    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:59:14.712528    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:59:14.727386    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:59:14.727397    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:59:14.741711    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:59:14.741723    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:59:14.753727    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:14.753741    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:59:14.753767    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:59:14.753771    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:14.753775    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:14.753779    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:14.753782    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:59:24.757958    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:29.760558    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:29.760661    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:29.772158    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:59:29.772239    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:29.782593    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:59:29.782662    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:29.792806    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:59:29.792888    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:29.803709    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:59:29.803781    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:29.814898    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:59:29.814984    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:29.825339    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:59:29.825417    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:29.835449    8534 logs.go:282] 0 containers: []
	W1008 10:59:29.835468    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:29.835537    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:29.846386    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:59:29.846405    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:59:29.846410    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:59:29.858448    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:59:29.858459    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:59:29.870242    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:59:29.870253    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:59:29.884495    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:59:29.884507    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:59:29.897056    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:59:29.897071    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:59:29.911404    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:29.911420    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:29.935318    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:59:29.935327    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:29.947853    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:29.947863    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:59:29.961911    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:29.962011    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:29.986813    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:59:29.986821    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:59:30.000285    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:59:30.000299    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:59:30.012188    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:59:30.012202    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:59:30.023183    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:59:30.023194    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:59:30.034362    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:59:30.034375    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:59:30.046699    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:30.046711    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:30.051520    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:30.051528    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:30.086295    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:59:30.086309    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:59:30.102483    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:59:30.102497    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:59:30.120048    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:30.120061    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:59:30.120087    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:59:30.120092    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:30.120095    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:30.120099    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:30.120102    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:59:40.124213    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:45.127469    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:45.127577    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:45.140149    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:59:45.140229    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:45.152163    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:59:45.152240    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:45.163469    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:59:45.163550    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:45.174386    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:59:45.174467    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:45.186064    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:59:45.186136    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:45.198560    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:59:45.198635    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:45.210150    8534 logs.go:282] 0 containers: []
	W1008 10:59:45.210163    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:45.210230    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:45.221517    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:59:45.221537    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:45.221542    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:59:45.236933    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:45.237034    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:45.262878    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:45.262905    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:45.267858    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:45.267867    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:45.292500    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:59:45.292517    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:59:45.306865    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:59:45.306878    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:59:45.320664    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:59:45.320677    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:59:45.336341    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:59:45.336353    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:59:45.349095    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:59:45.349107    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:59:45.361126    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:59:45.361137    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:59:45.375466    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:59:45.375478    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:59:45.393782    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:59:45.393791    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:59:45.409935    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:59:45.409950    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:59:45.422544    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:59:45.422556    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:45.435753    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:45.435768    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:45.481079    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:59:45.481093    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:59:45.493973    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:59:45.493983    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:59:45.511742    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:59:45.511754    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:59:45.526789    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:45.526799    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:59:45.526827    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:59:45.526832    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:45.526887    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:45.526895    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:45.526900    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:59:55.531013    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:00.533535    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:00.534029    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:00.566688    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 11:00:00.566836    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:00.586685    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 11:00:00.586794    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:00.604877    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 11:00:00.604953    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:00.616275    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 11:00:00.616345    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:00.627119    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 11:00:00.627203    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:00.637750    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 11:00:00.637820    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:00.647588    8534 logs.go:282] 0 containers: []
	W1008 11:00:00.647599    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:00.647658    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:00.660000    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 11:00:00.660021    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:00.660028    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:00.664428    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:00.664436    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:00.699849    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 11:00:00.699859    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 11:00:00.712152    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 11:00:00.712166    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 11:00:00.731057    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 11:00:00.731070    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 11:00:00.743407    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:00.743420    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 11:00:00.759765    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:00.759864    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:00.785015    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 11:00:00.785025    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 11:00:00.800169    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 11:00:00.800182    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 11:00:00.816887    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 11:00:00.816899    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 11:00:00.828900    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:00:00.828912    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:00.841577    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 11:00:00.841590    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 11:00:00.854220    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 11:00:00.854232    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 11:00:00.867750    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 11:00:00.867762    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 11:00:00.878584    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 11:00:00.878598    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 11:00:00.891309    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 11:00:00.891320    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 11:00:00.902862    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 11:00:00.902875    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 11:00:00.914369    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:00.914381    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:00.939423    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:00.939436    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 11:00:00.939471    8534 out.go:270] X Problems detected in kubelet:
	W1008 11:00:00.939485    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:00.939498    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:00.939512    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:00.939516    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:00:10.943716    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:15.946180    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:15.946457    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:15.963337    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 11:00:15.963435    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:15.976690    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 11:00:15.976762    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:15.989372    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 11:00:15.989456    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:16.000239    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 11:00:16.000328    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:16.011005    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 11:00:16.011082    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:16.021796    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 11:00:16.021873    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:16.032788    8534 logs.go:282] 0 containers: []
	W1008 11:00:16.032801    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:16.032876    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:16.043358    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 11:00:16.043381    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 11:00:16.043387    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 11:00:16.056960    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 11:00:16.056973    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 11:00:16.074623    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 11:00:16.074634    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 11:00:16.086852    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:16.086863    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:16.110155    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 11:00:16.110163    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 11:00:16.124012    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 11:00:16.124022    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 11:00:16.137173    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 11:00:16.137188    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 11:00:16.148440    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 11:00:16.148455    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 11:00:16.159486    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 11:00:16.159496    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 11:00:16.171980    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:16.171992    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 11:00:16.187206    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:16.187308    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:16.212108    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:16.212114    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:16.216373    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:16.216380    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:16.253653    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 11:00:16.253667    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 11:00:16.275023    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 11:00:16.275036    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 11:00:16.292747    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 11:00:16.292762    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 11:00:16.306948    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 11:00:16.306958    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 11:00:16.319988    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:00:16.319999    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:16.332443    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:16.332457    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 11:00:16.332484    8534 out.go:270] X Problems detected in kubelet:
	W1008 11:00:16.332491    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:16.332494    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:16.332498    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:16.332501    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:00:26.333827    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:31.336146    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:31.336624    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:31.367606    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 11:00:31.367759    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:31.386885    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 11:00:31.386991    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:31.401333    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 11:00:31.401424    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:31.418241    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 11:00:31.418327    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:31.436699    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 11:00:31.436774    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:31.451274    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 11:00:31.451351    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:31.464569    8534 logs.go:282] 0 containers: []
	W1008 11:00:31.464581    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:31.464656    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:31.476344    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 11:00:31.476363    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 11:00:31.476369    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 11:00:31.489587    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 11:00:31.489600    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 11:00:31.501279    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:31.501292    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:31.526696    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 11:00:31.526706    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 11:00:31.538455    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:00:31.538468    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:31.550921    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:31.550933    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 11:00:31.565033    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:31.565137    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:31.590415    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 11:00:31.590422    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 11:00:31.604857    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 11:00:31.604866    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 11:00:31.618279    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 11:00:31.618293    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 11:00:31.630444    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 11:00:31.630454    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 11:00:31.648513    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:31.648523    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:31.652791    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 11:00:31.652796    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 11:00:31.665781    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:31.665790    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:31.700408    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 11:00:31.700419    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 11:00:31.714035    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 11:00:31.714046    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 11:00:31.725411    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 11:00:31.725426    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 11:00:31.737169    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 11:00:31.737182    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 11:00:31.748457    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:31.748470    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 11:00:31.748503    8534 out.go:270] X Problems detected in kubelet:
	W1008 11:00:31.748508    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:31.748516    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:31.748519    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:31.748522    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:00:41.752611    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:46.754893    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:46.755050    8534 kubeadm.go:597] duration metric: took 4m7.99172875s to restartPrimaryControlPlane
	W1008 11:00:46.755152    8534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
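
The "Checking apiserver healthz" / "stopped" pairs above, which recur for the rest of this log, are a plain HTTPS GET against the apiserver's /healthz endpoint with a short client timeout. A minimal Go sketch of the same probe, assuming only the URL and the roughly five-second timeout visible in the timestamps (the real api_server.go check also validates the response body and trusts the cluster CA rather than skipping verification):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz mirrors the probe logged by api_server.go: GET /healthz,
    // giving up after a short timeout. An unreachable server surfaces as
    // "Client.Timeout exceeded while awaiting headers", exactly as logged.
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch only: the real client verifies against the cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
            fmt.Println("stopped:", err)
        }
    }
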
	I1008 11:00:46.755204    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1008 11:00:47.788089    8534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03287275s)
	I1008 11:00:47.788178    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 11:00:47.793367    8534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 11:00:47.796591    8534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 11:00:47.799462    8534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 11:00:47.799466    8534 kubeadm.go:157] found existing configuration files:
	
	I1008 11:00:47.799494    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf
	I1008 11:00:47.802292    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 11:00:47.802330    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 11:00:47.805107    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf
	I1008 11:00:47.807739    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 11:00:47.807770    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 11:00:47.811048    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf
	I1008 11:00:47.814193    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 11:00:47.814220    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 11:00:47.816782    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf
	I1008 11:00:47.819584    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 11:00:47.819607    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
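
The four grep-then-rm pairs above are a stale-kubeconfig sweep: any /etc/kubernetes/*.conf that does not mention the expected control-plane endpoint is deleted so kubeadm init can regenerate it. A local sketch of that loop, using the paths and endpoint from this log (the real code shells each command into the guest over SSH via ssh_runner; grep's exit status 2 here simply means the file is missing):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:51326"
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, conf := range confs {
            // grep exits non-zero when the endpoint (or the file itself)
            // is absent; either way the config cannot be reused.
            if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
                fmt.Printf("%s may not be in %s - removing\n", endpoint, conf)
                exec.Command("sudo", "rm", "-f", conf).Run()
            }
        }
    }
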
	I1008 11:00:47.822770    8534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 11:00:47.843327    8534 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1008 11:00:47.843363    8534 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 11:00:47.892895    8534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 11:00:47.893045    8534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 11:00:47.893144    8534 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 11:00:47.950053    8534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 11:00:47.955672    8534 out.go:235]   - Generating certificates and keys ...
	I1008 11:00:47.955743    8534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 11:00:47.955777    8534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 11:00:47.955825    8534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 11:00:47.955861    8534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 11:00:47.955900    8534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 11:00:47.956980    8534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 11:00:47.957014    8534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 11:00:47.957044    8534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 11:00:47.957104    8534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 11:00:47.957189    8534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 11:00:47.957213    8534 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 11:00:47.957285    8534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 11:00:48.074879    8534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 11:00:48.203572    8534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 11:00:48.312210    8534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 11:00:48.462380    8534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 11:00:48.494250    8534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 11:00:48.494531    8534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 11:00:48.494601    8534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 11:00:48.587750    8534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 11:00:48.591227    8534 out.go:235]   - Booting up control plane ...
	I1008 11:00:48.591433    8534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 11:00:48.592617    8534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 11:00:48.592925    8534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 11:00:48.594432    8534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 11:00:48.594543    8534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 11:00:53.598595    8534 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004099 seconds
	I1008 11:00:53.598766    8534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 11:00:53.609484    8534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 11:00:54.119775    8534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 11:00:54.119880    8534 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-967000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 11:00:54.625707    8534 kubeadm.go:310] [bootstrap-token] Using token: tlxz13.k2jch30i1blbq3wh
	I1008 11:00:54.629129    8534 out.go:235]   - Configuring RBAC rules ...
	I1008 11:00:54.629208    8534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 11:00:54.629318    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 11:00:54.636321    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 11:00:54.637552    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 11:00:54.638810    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 11:00:54.639972    8534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 11:00:54.644109    8534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 11:00:54.806113    8534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 11:00:55.031183    8534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 11:00:55.031999    8534 kubeadm.go:310] 
	I1008 11:00:55.032032    8534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 11:00:55.032036    8534 kubeadm.go:310] 
	I1008 11:00:55.032075    8534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 11:00:55.032081    8534 kubeadm.go:310] 
	I1008 11:00:55.032102    8534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 11:00:55.032131    8534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 11:00:55.032158    8534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 11:00:55.032161    8534 kubeadm.go:310] 
	I1008 11:00:55.032198    8534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 11:00:55.032202    8534 kubeadm.go:310] 
	I1008 11:00:55.032234    8534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 11:00:55.032239    8534 kubeadm.go:310] 
	I1008 11:00:55.032267    8534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 11:00:55.032308    8534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 11:00:55.032346    8534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 11:00:55.032350    8534 kubeadm.go:310] 
	I1008 11:00:55.032404    8534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 11:00:55.032442    8534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 11:00:55.032445    8534 kubeadm.go:310] 
	I1008 11:00:55.032483    8534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tlxz13.k2jch30i1blbq3wh \
	I1008 11:00:55.032537    8534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a \
	I1008 11:00:55.032548    8534 kubeadm.go:310] 	--control-plane 
	I1008 11:00:55.032553    8534 kubeadm.go:310] 
	I1008 11:00:55.032612    8534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 11:00:55.032618    8534 kubeadm.go:310] 
	I1008 11:00:55.032654    8534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tlxz13.k2jch30i1blbq3wh \
	I1008 11:00:55.032705    8534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a 
	I1008 11:00:55.032846    8534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
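
The --discovery-token-ca-cert-hash in the join commands above follows the kubeadm convention: a SHA-256 digest of the DER-encoded public key (SubjectPublicKeyInfo) of the cluster CA certificate. A sketch that recomputes it, assuming the CA sits at ca.crt under the certificateDir ("/var/lib/minikube/certs") logged earlier:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the SubjectPublicKeyInfo, not the whole certificate.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
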
	I1008 11:00:55.032853    8534 cni.go:84] Creating CNI manager for ""
	I1008 11:00:55.032861    8534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:00:55.037137    8534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 11:00:55.043274    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 11:00:55.046271    8534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
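
The 496-byte /etc/cni/net.d/1-k8s.conflist written here is the bridge CNI configuration chosen at cni.go:158 above. A representative conflist of that shape, assuming typical bridge + host-local + portmap settings (illustrative only; the exact file minikube generates may differ):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
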
	I1008 11:00:55.051396    8534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 11:00:55.051453    8534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 11:00:55.051481    8534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-967000 minikube.k8s.io/updated_at=2024_10_08T11_00_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=running-upgrade-967000 minikube.k8s.io/primary=true
	I1008 11:00:55.094747    8534 ops.go:34] apiserver oom_adj: -16
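
The oom_adj value just logged (-16) comes from reading /proc/$(pgrep kube-apiserver)/oom_adj and tells the kernel OOM killer to spare the apiserver under memory pressure. A small Go sketch of the same check, run locally on the node:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Newest process whose full command line matches the apiserver,
        // mirroring the pgrep invocation earlier in this log.
        pid, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            panic(err)
        }
        data, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data)))
    }
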
	I1008 11:00:55.094748    8534 kubeadm.go:1113] duration metric: took 43.339042ms to wait for elevateKubeSystemPrivileges
	I1008 11:00:55.094762    8534 kubeadm.go:394] duration metric: took 4m16.359387291s to StartCluster
	I1008 11:00:55.094772    8534 settings.go:142] acquiring lock: {Name:mk8a824673b36585a3cfee48bd81254259b5c84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:55.094856    8534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:00:55.095279    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:55.095482    8534 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:00:55.095506    8534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 11:00:55.095541    8534 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-967000"
	I1008 11:00:55.095549    8534 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-967000"
	W1008 11:00:55.095553    8534 addons.go:243] addon storage-provisioner should already be in state true
	I1008 11:00:55.095564    8534 host.go:66] Checking if "running-upgrade-967000" exists ...
	I1008 11:00:55.095577    8534 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-967000"
	I1008 11:00:55.095585    8534 config.go:182] Loaded profile config "running-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 11:00:55.095592    8534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-967000"
	I1008 11:00:55.097152    8534 kapi.go:59] client config for running-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060880f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 11:00:55.097279    8534 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-967000"
	W1008 11:00:55.097284    8534 addons.go:243] addon default-storageclass should already be in state true
	I1008 11:00:55.097292    8534 host.go:66] Checking if "running-upgrade-967000" exists ...
	I1008 11:00:55.100145    8534 out.go:177] * Verifying Kubernetes components...
	I1008 11:00:55.100486    8534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:55.104351    8534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 11:00:55.104358    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 11:00:55.108044    8534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 11:00:55.112105    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 11:00:55.116019    8534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:55.116025    8534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 11:00:55.116031    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 11:00:55.185279    8534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 11:00:55.190373    8534 api_server.go:52] waiting for apiserver process to appear ...
	I1008 11:00:55.190419    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 11:00:55.194032    8534 api_server.go:72] duration metric: took 98.540083ms to wait for apiserver process to appear ...
	I1008 11:00:55.194042    8534 api_server.go:88] waiting for apiserver healthz status ...
	I1008 11:00:55.194050    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:55.208756    8534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:55.259872    8534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:55.533302    8534 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 11:00:55.533314    8534 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 11:01:00.194359    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:00.194400    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:05.196129    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:05.196187    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:10.196417    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:10.196442    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:15.196758    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:15.196809    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:20.197297    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:20.197331    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:25.197875    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:25.197924    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1008 11:01:25.534325    8534 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1008 11:01:25.542584    8534 out.go:177] * Enabled addons: storage-provisioner
	I1008 11:01:25.550554    8534 addons.go:510] duration metric: took 30.45513075s for enable addons: enabled=[storage-provisioner]
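
For reference, "making standard the default storage class" (the default-storageclass callback that failed above with an i/o timeout) amounts to listing StorageClasses and setting the storageclass.kubernetes.io/is-default-class annotation on the one named "standard". A client-go sketch of that step, assuming the in-node kubeconfig path from this log; in this run it is the List call that dies because 10.0.2.15:8443 never answers:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The step that timed out in this run: the apiserver is unreachable.
        scs, err := cs.StorageV1().StorageClasses().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err) // e.g. "dial tcp 10.0.2.15:8443: i/o timeout"
        }
        patch := []byte(`{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}`)
        for _, sc := range scs.Items {
            if sc.Name == "standard" {
                if _, err := cs.StorageV1().StorageClasses().Patch(context.TODO(),
                    sc.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
                    panic(err)
                }
                fmt.Println("marked", sc.Name, "as default")
            }
        }
    }
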
	I1008 11:01:30.198751    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:30.198784    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:35.199722    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:35.199768    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:40.201167    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:40.201193    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:45.202751    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:45.202793    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:50.204672    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:50.204696    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:55.206817    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:55.206936    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:01:55.217637    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:01:55.217716    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:01:55.227730    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:01:55.227803    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:01:55.238890    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:01:55.238972    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:01:55.249226    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:01:55.249313    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:01:55.259792    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:01:55.259870    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:01:55.271689    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:01:55.271754    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:01:55.282125    8534 logs.go:282] 0 containers: []
	W1008 11:01:55.282136    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:01:55.282206    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:01:55.292978    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:01:55.292992    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:01:55.292998    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:01:55.327460    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:01:55.327467    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:01:55.331800    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:01:55.331808    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:01:55.369987    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:01:55.369999    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:01:55.384206    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:01:55.384219    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:01:55.399406    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:01:55.399420    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:01:55.417061    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:01:55.417072    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:01:55.428618    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:01:55.428630    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:01:55.443212    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:01:55.443226    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:01:55.454710    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:01:55.454724    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:01:55.466170    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:01:55.466182    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:01:55.480588    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:01:55.480599    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:01:55.492223    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:01:55.492238    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
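
Every gathering pass in this log follows the same pattern: resolve container IDs per control-plane component with a docker ps name filter, then tail the last 400 lines of each container's logs (kubelet and Docker themselves come from journalctl instead). A condensed local sketch of one pass; the real runner shells these same commands into the guest over SSH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "storage-provisioner",
        }
        for _, c := range components {
            // Same filter as the log: container names are prefixed k8s_<component>.
            out, err := exec.Command("docker", "ps", "-a",
                "--filter=name=k8s_"+c, "--format={{.ID}}").Output()
            if err != nil {
                continue
            }
            for _, id := range strings.Fields(string(out)) {
                fmt.Printf("==> %s [%s] <==\n", c, id)
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Print(string(logs))
            }
        }
    }
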
	I1008 11:01:58.017889    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:03.020202    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:03.020450    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:03.036986    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:03.037088    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:03.049474    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:03.049560    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:03.064290    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:03.064361    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:03.074397    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:03.074471    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:03.087566    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:03.087653    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:03.097962    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:03.098042    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:03.108424    8534 logs.go:282] 0 containers: []
	W1008 11:02:03.108434    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:03.108493    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:03.119106    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:03.119124    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:03.119130    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:03.156231    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:03.156243    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:03.171736    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:03.171750    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:03.190293    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:03.190309    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:03.218605    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:03.218621    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:03.231198    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:03.231209    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:03.243629    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:03.243642    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:03.257791    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:03.257802    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:03.292898    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:03.292914    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:03.297828    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:03.297836    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:03.312167    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:03.312183    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:03.326061    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:03.326071    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:03.337903    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:03.337915    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:05.851690    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:10.853976    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:10.854165    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:10.872381    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:10.872452    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:10.884946    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:10.885017    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:10.895464    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:10.895545    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:10.906048    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:10.906127    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:10.916240    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:10.916307    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:10.927047    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:10.927119    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:10.937275    8534 logs.go:282] 0 containers: []
	W1008 11:02:10.937287    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:10.937349    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:10.948049    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:10.948065    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:10.948070    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:10.985772    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:10.985784    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:11.005413    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:11.005424    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:11.018098    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:11.018109    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:11.031904    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:11.031915    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:11.043617    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:11.043629    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:11.055719    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:11.055729    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:11.092613    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:11.092622    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:11.097364    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:11.097371    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:11.110921    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:11.110931    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:11.122704    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:11.122715    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:11.141058    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:11.141068    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:11.158195    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:11.158211    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:13.685136    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:18.687157    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:18.687461    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:18.717230    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:18.717375    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:18.734902    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:18.735013    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:18.748966    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:18.749054    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:18.760738    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:18.760815    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:18.776433    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:18.776502    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:18.787239    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:18.787316    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:18.797231    8534 logs.go:282] 0 containers: []
	W1008 11:02:18.797243    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:18.797307    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:18.808660    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:18.808675    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:18.808680    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:18.820150    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:18.820162    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:18.832299    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:18.832311    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:18.846954    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:18.846964    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:18.885682    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:18.885698    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:18.900850    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:18.900863    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:18.915801    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:18.915813    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:18.937922    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:18.937933    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:18.950084    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:18.950097    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:18.974890    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:18.974899    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:18.986698    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:18.986712    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:19.021216    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:19.021224    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:19.025899    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:19.025906    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:21.538697    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:26.540981    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:26.541155    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:26.554450    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:26.554522    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:26.565154    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:26.565235    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:26.575801    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:26.575880    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:26.586443    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:26.586508    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:26.596649    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:26.596724    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:26.607334    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:26.607401    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:26.617904    8534 logs.go:282] 0 containers: []
	W1008 11:02:26.617920    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:26.617987    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:26.628397    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:26.628432    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:26.628440    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:26.646793    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:26.646803    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:26.663889    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:26.663903    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:26.688189    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:26.688198    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:26.692873    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:26.692879    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:26.731118    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:26.731133    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:26.748949    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:26.748963    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:26.760168    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:26.760181    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:26.771882    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:26.771897    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:26.786540    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:26.786554    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:26.798780    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:26.798795    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:26.810588    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:26.810599    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:26.845010    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:26.845018    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:29.359587    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:34.361883    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:34.362011    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:34.373595    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:34.373684    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:34.387878    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:34.387961    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:34.397988    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:34.398075    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:34.412950    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:34.413028    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:34.423379    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:34.423461    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:34.433850    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:34.433927    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:34.443866    8534 logs.go:282] 0 containers: []
	W1008 11:02:34.443875    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:34.443938    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:34.460199    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:34.460214    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:34.460220    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:34.471915    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:34.471928    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:34.483829    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:34.483844    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:34.519536    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:34.519544    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:34.555060    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:34.555073    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:34.569381    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:34.569392    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:34.581845    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:34.581855    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:34.597976    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:34.597985    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:34.619932    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:34.619943    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:34.638345    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:34.638355    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:34.662911    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:34.662918    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:34.667129    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:34.667138    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:34.685269    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:34.685285    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:37.197154    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:42.198388    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:42.198832    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:42.228213    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:42.228360    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:42.250629    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:42.250719    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:42.263459    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:42.263543    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:42.276771    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:42.276849    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:42.287675    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:42.287757    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:42.298497    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:42.298572    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:42.318730    8534 logs.go:282] 0 containers: []
	W1008 11:02:42.318742    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:42.318802    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:42.329427    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:42.329443    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:42.329448    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:42.367156    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:42.367168    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:42.382665    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:42.382680    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:42.394969    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:42.394980    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:42.406969    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:42.406981    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:42.426021    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:42.426033    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:42.451507    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:42.451522    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:42.463088    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:42.463105    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:42.467861    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:42.467867    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:42.509532    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:42.509543    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:42.525807    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:42.525819    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:42.541516    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:42.541531    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:42.555308    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:42.555321    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:45.069044    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:50.071243    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:50.071368    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:50.082078    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:50.082177    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:50.092741    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:50.092824    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:50.103049    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:50.103123    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:50.113767    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:50.113835    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:50.124038    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:50.124125    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:50.139254    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:50.139330    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:50.152988    8534 logs.go:282] 0 containers: []
	W1008 11:02:50.152999    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:50.153071    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:50.163495    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:50.163511    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:50.163516    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:50.175217    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:50.175228    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:50.186502    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:50.186512    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:50.212162    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:50.212173    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:50.217225    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:50.217231    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:50.229248    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:50.229262    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:50.244969    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:50.244983    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:50.259393    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:50.259406    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:50.272297    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:50.272306    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:50.295905    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:50.295915    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:50.307612    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:50.307627    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:50.344588    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:50.344603    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:50.392536    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:50.392550    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:52.909383    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:57.911604    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:57.911883    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:57.935577    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:57.935690    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:57.951824    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:57.951919    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:57.964895    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:57.964977    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:57.976528    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:57.976605    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:57.987522    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:57.987598    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:57.998356    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:57.998441    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:58.011006    8534 logs.go:282] 0 containers: []
	W1008 11:02:58.011018    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:58.011083    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:58.022616    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:58.022631    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:58.022637    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:58.057603    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:58.057612    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:58.061723    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:58.061730    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:58.097297    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:58.097308    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:58.112256    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:58.112268    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:58.130075    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:58.130086    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:58.141776    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:58.141786    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:58.155850    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:58.155861    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:58.169395    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:58.169406    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:58.188760    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:58.188770    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:58.200266    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:58.200274    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:58.218608    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:58.218619    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:58.241964    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:58.241975    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:00.756009    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:05.758251    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:05.758490    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:05.778693    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:05.778776    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:05.792781    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:05.792863    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:05.804686    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:05.804769    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:05.815932    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:05.815997    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:05.826694    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:05.826779    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:05.837239    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:05.837317    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:05.847016    8534 logs.go:282] 0 containers: []
	W1008 11:03:05.847029    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:05.847095    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:05.857645    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:05.857659    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:05.857664    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:05.881185    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:05.881192    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:05.885917    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:05.885926    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:05.900897    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:05.900906    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:05.924129    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:05.924139    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:05.936367    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:05.936377    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:05.948542    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:05.948552    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:05.966078    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:05.966092    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:06.000780    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:06.000788    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:06.036391    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:06.036403    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:06.051176    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:06.051187    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:06.066126    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:06.066136    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:06.077508    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:06.077519    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:08.591556    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:13.593560    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:13.593651    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:13.608549    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:13.608636    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:13.619620    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:13.619707    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:13.630533    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:13.630610    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:13.641336    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:13.641415    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:13.652131    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:13.652203    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:13.663301    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:13.663374    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:13.673481    8534 logs.go:282] 0 containers: []
	W1008 11:03:13.673492    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:13.673557    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:13.683744    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:13.683761    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:13.683767    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:13.698647    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:13.698658    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:13.718025    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:13.718034    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:13.729774    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:13.729786    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:13.741652    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:13.741663    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:13.753502    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:13.753514    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:13.758111    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:13.758118    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:13.769326    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:13.769338    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:13.780793    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:13.780805    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:13.815368    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:13.815378    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:13.854623    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:13.854636    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:13.867902    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:13.867913    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:13.893421    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:13.893428    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:13.908046    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:13.908059    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:13.922527    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:13.922541    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:16.440283    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:21.442621    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:21.442872    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:21.466289    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:21.466424    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:21.483468    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:21.483548    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:21.496258    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:21.496345    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:21.507299    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:21.507379    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:21.522478    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:21.522550    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:21.533138    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:21.533216    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:21.544175    8534 logs.go:282] 0 containers: []
	W1008 11:03:21.544188    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:21.544253    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:21.554698    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:21.554717    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:21.554723    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:21.572850    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:21.572868    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:21.611720    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:21.611734    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:21.623691    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:21.623702    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:21.635699    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:21.635711    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:21.647929    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:21.647942    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:21.659569    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:21.659580    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:21.664316    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:21.664324    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:21.675842    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:21.675853    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:21.687761    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:21.687773    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:21.712161    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:21.712168    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:21.723932    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:21.723944    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:21.758803    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:21.758814    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:21.773580    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:21.773593    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:21.787434    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:21.787443    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:24.304870    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:29.307098    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:29.307269    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:29.323104    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:29.323203    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:29.335667    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:29.335755    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:29.346586    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:29.346669    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:29.356653    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:29.356732    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:29.368389    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:29.368469    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:29.378661    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:29.378739    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:29.388796    8534 logs.go:282] 0 containers: []
	W1008 11:03:29.388812    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:29.388884    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:29.399699    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:29.399714    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:29.399720    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:29.434216    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:29.434225    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:29.438550    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:29.438559    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:29.449669    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:29.449680    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:29.461139    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:29.461150    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:29.473260    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:29.473272    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:29.499391    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:29.499401    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:29.513774    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:29.513789    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:29.525486    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:29.525497    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:29.537151    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:29.537161    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:29.552397    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:29.552407    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:29.569679    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:29.569689    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:29.582100    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:29.582113    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:29.593519    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:29.593530    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:29.629931    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:29.629943    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:32.146291    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:37.148599    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:37.148809    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:37.167688    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:37.167795    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:37.182120    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:37.182195    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:37.193201    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:37.193283    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:37.203843    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:37.203923    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:37.217169    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:37.217248    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:37.228193    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:37.228268    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:37.238718    8534 logs.go:282] 0 containers: []
	W1008 11:03:37.238728    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:37.238794    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:37.249391    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:37.249409    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:37.249414    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:37.264758    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:37.264769    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:37.277783    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:37.277795    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:37.297897    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:37.297909    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:37.309093    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:37.309104    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:37.324437    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:37.324449    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:37.335987    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:37.336001    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:37.370862    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:37.370876    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:37.385821    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:37.385831    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:37.397825    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:37.397838    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:37.422757    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:37.422765    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:37.458553    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:37.458568    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:37.470287    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:37.470299    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:37.490863    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:37.490878    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:37.508247    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:37.508257    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:40.015125    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:45.017420    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:45.017595    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:45.031669    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:45.031769    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:45.050738    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:45.050824    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:45.062341    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:45.062414    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:45.072619    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:45.072700    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:45.083080    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:45.083163    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:45.099420    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:45.099495    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:45.112110    8534 logs.go:282] 0 containers: []
	W1008 11:03:45.112125    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:45.112194    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:45.123155    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:45.123174    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:45.123179    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:45.137472    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:45.137484    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:45.149570    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:45.149581    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:45.161028    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:45.161042    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:45.195465    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:45.195481    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:45.207717    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:45.207728    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:45.219751    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:45.219763    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:45.245767    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:45.245775    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:45.250140    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:45.250149    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:45.271288    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:45.271300    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:45.282982    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:45.282992    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:45.299754    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:45.299765    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:45.317400    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:45.317410    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:45.352198    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:45.352206    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:45.363382    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:45.363392    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:47.876550    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:52.878851    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:52.878974    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:52.895478    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:52.895561    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:52.906808    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:52.906892    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:52.917788    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:52.917856    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:52.929019    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:52.929096    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:52.939599    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:52.939680    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:52.952141    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:52.952224    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:52.962842    8534 logs.go:282] 0 containers: []
	W1008 11:03:52.962854    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:52.962920    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:52.973315    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:52.973335    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:52.973339    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:52.985396    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:52.985408    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:53.002538    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:53.002552    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:53.017191    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:53.017208    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:53.034544    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:53.034555    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:53.048994    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:53.049004    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:53.062921    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:53.062932    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:53.075951    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:53.075963    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:53.087661    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:53.087672    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:53.099822    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:53.099834    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:53.104858    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:53.104867    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:53.128449    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:53.128457    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:53.140049    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:53.140059    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:53.175622    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:53.175638    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:53.187814    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:53.187824    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:55.725182    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:00.726353    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:00.726635    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:00.749262    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:00.749366    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:00.775670    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:00.775755    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:00.787663    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:00.787749    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:00.798074    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:00.798152    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:00.809033    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:00.809124    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:00.824603    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:00.824672    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:00.834765    8534 logs.go:282] 0 containers: []
	W1008 11:04:00.834776    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:00.834843    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:00.845101    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:00.845118    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:00.845123    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:00.880054    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:00.880066    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:00.892462    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:00.892476    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:00.917075    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:00.917083    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:00.952583    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:00.952594    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:00.964649    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:00.964660    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:00.976437    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:00.976448    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:00.993912    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:00.993922    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:01.005564    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:01.005574    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:01.020798    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:01.020811    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:01.025863    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:01.025887    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:01.040510    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:01.040524    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:01.054832    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:01.054843    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:01.071672    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:01.071686    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:01.083388    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:01.083403    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:03.597884    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:08.599757    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:08.599984    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:08.615222    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:08.615308    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:08.627508    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:08.627590    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:08.638676    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:08.638755    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:08.649256    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:08.649322    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:08.659966    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:08.660042    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:08.670535    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:08.670608    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:08.681469    8534 logs.go:282] 0 containers: []
	W1008 11:04:08.681481    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:08.681541    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:08.692769    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:08.692786    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:08.692793    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:08.708004    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:08.708016    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:08.719795    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:08.719808    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:08.731964    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:08.731975    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:08.746550    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:08.746562    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:08.760231    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:08.760245    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:08.774836    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:08.774848    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:08.786683    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:08.786696    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:08.800450    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:08.800461    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:08.817086    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:08.817098    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:08.855228    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:08.855240    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:08.878268    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:08.878280    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:08.896050    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:08.896061    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:08.919988    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:08.919997    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:08.954833    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:08.954847    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
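[Editor's note] The cycle above repeats for the remainder of the start-up wait: minikube probes the apiserver's /healthz endpoint, and each time the probe times out it enumerates the control-plane containers per component ("docker ps -a --filter=name=k8s_...") and tails the last 400 lines of each before retrying. Note the fallback in the container-status step: "which crictl || echo crictl" keeps the command well-formed even when crictl is absent, and the trailing "|| sudo docker ps -a" falls back to Docker. Below is a minimal Go sketch of such a poll-and-gather loop; the names and timings are illustrative, not minikube's actual implementation.

	package sketch

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthy polls url until it returns 200 OK or the overall deadline
	// passes. gather is invoked after every failed probe, mirroring the
	// log-collection cycles in the transcript above.
	func waitForHealthy(url string, timeout time.Duration, gather func()) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly the gap between "Checking" and "stopped:" above
			Transport: &http.Transport{
				// the apiserver serves a self-signed cert inside the guest
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				healthy := resp.StatusCode == http.StatusOK
				resp.Body.Close()
				if healthy {
					return nil
				}
			}
			gather() // docker ps per component, then "docker logs --tail 400 <id>"
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver healthz never reported healthy: %s", url)
	}

Each failed probe costs the 5s client timeout plus the gather time, which is consistent with the timestamps above advancing in roughly eight-second steps.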
	I1008 11:04:11.461351    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:16.463712    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:16.464164    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:16.497310    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:16.497465    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:16.517167    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:16.517268    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:16.532398    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:16.532480    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:16.543772    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:16.543849    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:16.554844    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:16.554925    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:16.565216    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:16.565299    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:16.576354    8534 logs.go:282] 0 containers: []
	W1008 11:04:16.576370    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:16.576441    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:16.587343    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:16.587364    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:16.587371    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:16.593001    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:16.593010    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:16.605039    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:16.605049    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:16.630298    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:16.630309    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:16.642349    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:16.642361    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:16.677494    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:16.677512    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:16.712240    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:16.712252    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:16.727117    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:16.727128    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:16.741538    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:16.741549    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:16.752858    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:16.752873    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:16.764871    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:16.764882    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:16.779682    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:16.779695    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:16.791128    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:16.791138    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:16.803370    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:16.803380    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:16.819037    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:16.819048    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:19.338849    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:24.341040    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:24.341157    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:24.356888    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:24.356973    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:24.367723    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:24.367800    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:24.378840    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:24.378923    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:24.389297    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:24.389375    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:24.399778    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:24.399851    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:24.410476    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:24.410553    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:24.420995    8534 logs.go:282] 0 containers: []
	W1008 11:04:24.421015    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:24.421080    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:24.432014    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:24.432035    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:24.432041    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:24.470034    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:24.470046    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:24.484556    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:24.484569    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:24.508461    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:24.508469    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:24.513514    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:24.513523    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:24.527922    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:24.527934    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:24.539914    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:24.539926    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:24.551109    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:24.551120    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:24.567969    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:24.567980    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:24.580170    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:24.580184    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:24.617666    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:24.617678    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:24.632851    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:24.632866    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:24.644867    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:24.644879    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:24.657076    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:24.657089    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:24.672419    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:24.672433    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:27.186434    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:32.188732    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:32.188922    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:32.201623    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:32.201719    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:32.212169    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:32.212242    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:32.222921    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:32.222992    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:32.233981    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:32.234061    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:32.247566    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:32.247640    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:32.258535    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:32.258615    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:32.270577    8534 logs.go:282] 0 containers: []
	W1008 11:04:32.270591    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:32.270662    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:32.281500    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:32.281517    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:32.281522    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:32.296543    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:32.296554    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:32.311400    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:32.311412    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:32.347804    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:32.347814    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:32.359688    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:32.359699    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:32.381256    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:32.381266    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:32.393148    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:32.393160    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:32.404564    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:32.404575    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:32.416810    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:32.416820    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:32.429015    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:32.429026    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:32.440541    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:32.440556    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:32.464210    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:32.464218    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:32.477486    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:32.477498    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:32.512554    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:32.512566    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:32.526869    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:32.526880    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:35.033439    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:39.980305    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:39.980553    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:39.996601    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:39.996704    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:40.008904    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:40.008976    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:40.019565    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:40.019647    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:40.029933    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:40.030007    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:40.041224    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:40.041323    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:40.052933    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:40.053028    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:40.067695    8534 logs.go:282] 0 containers: []
	W1008 11:04:40.067707    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:40.067776    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:40.078234    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:40.078253    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:40.078261    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:40.090083    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:40.090096    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:40.103210    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:40.103223    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:40.118085    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:40.118096    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:40.141040    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:40.141050    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:40.155336    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:40.155347    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:40.173583    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:40.173595    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:40.185391    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:40.185403    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:40.200553    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:40.200564    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:40.216564    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:40.216576    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:40.227896    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:40.227908    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:40.262320    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:40.262332    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:40.267042    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:40.267051    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:40.278452    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:40.278468    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:40.289551    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:40.289565    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:42.826131    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:47.828294    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:47.828581    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:47.853239    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:47.853375    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:47.870319    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:47.870411    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:47.883241    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:47.883325    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:47.894570    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:47.894643    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:47.904983    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:47.905063    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:47.920198    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:47.920275    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:47.931254    8534 logs.go:282] 0 containers: []
	W1008 11:04:47.931264    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:47.931333    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:47.943552    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:47.943571    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:47.943576    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:47.980569    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:47.980581    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:47.993236    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:47.993248    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:48.013026    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:48.013041    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:48.026492    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:48.026504    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:48.063927    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:48.063938    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:48.068762    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:48.068769    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:48.083420    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:48.083434    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:48.099626    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:48.099641    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:48.123710    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:48.123718    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:48.135901    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:48.135913    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:48.147854    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:48.147869    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:48.165802    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:48.165814    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:48.178414    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:48.178423    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:48.193657    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:48.193668    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:50.707414    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:55.709471    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:55.713953    8534 out.go:201] 
	W1008 11:04:55.718896    8534 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1008 11:04:55.718907    8534 out.go:270] * 
	W1008 11:04:55.719903    8534 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:04:55.731857    8534 out.go:201] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-967000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
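[Editor's note] The "exit status 80" here is minikube's numeric encoding of the failure class: the CLI maps each reason code (GUEST_START above) onto a process exit code so harnesses like this test can classify failures without parsing stderr. A hedged sketch of that mapping follows; the value 80 is taken from the output above, the identifier names are illustrative.

	package sketch

	import "os"

	// Assumed reason-to-exit-code table: only the GUEST_START/80 pairing is
	// evidenced by the log above, the shape of the table is an assumption.
	var exitCodes = map[string]int{
		"GUEST_START": 80,
	}

	// exitForReason terminates the process with the code registered for the
	// given failure reason, falling back to a generic non-zero status.
	func exitForReason(reason string) {
		if code, ok := exitCodes[reason]; ok {
			os.Exit(code)
		}
		os.Exit(1)
	}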
panic.go:629: *** TestRunningBinaryUpgrade FAILED at 2024-10-08 11:04:55.818277 -0700 PDT m=+1366.441817168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-967000 -n running-upgrade-967000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-967000 -n running-upgrade-967000: exit status 2 (15.585948292s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-967000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo cat                            | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo cat                            | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo cat                            | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo cat                            | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo                                | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo find                           | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-446000 sudo crio                           | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-446000                                     | cilium-446000             | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT | 08 Oct 24 10:54 PDT |
	| start   | -p kubernetes-upgrade-143000                         | kubernetes-upgrade-143000 | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-841000                             | offline-docker-841000     | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT | 08 Oct 24 10:54 PDT |
	| stop    | -p kubernetes-upgrade-143000                         | kubernetes-upgrade-143000 | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT | 08 Oct 24 10:54 PDT |
	| start   | -p kubernetes-upgrade-143000                         | kubernetes-upgrade-143000 | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-810000                            | minikube                  | jenkins | v1.26.0 | 08 Oct 24 10:54 PDT | 08 Oct 24 10:55 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-143000                         | kubernetes-upgrade-143000 | jenkins | v1.34.0 | 08 Oct 24 10:54 PDT | 08 Oct 24 10:54 PDT |
	| start   | -p running-upgrade-967000                            | minikube                  | jenkins | v1.26.0 | 08 Oct 24 10:54 PDT | 08 Oct 24 10:56 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-810000 stop                          | minikube                  | jenkins | v1.26.0 | 08 Oct 24 10:55 PDT | 08 Oct 24 10:56 PDT |
	| start   | -p stopped-upgrade-810000                            | stopped-upgrade-810000    | jenkins | v1.34.0 | 08 Oct 24 10:56 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-967000                            | running-upgrade-967000    | jenkins | v1.34.0 | 08 Oct 24 10:56 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 10:56:18
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 10:56:18.239533    8534 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:56:18.239688    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:56:18.239691    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:56:18.239693    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:56:18.239825    8534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:56:18.240858    8534 out.go:352] Setting JSON to false
	I1008 10:56:18.260853    8534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5148,"bootTime":1728405030,"procs":567,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:56:18.260918    8534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:56:18.264958    8534 out.go:177] * [running-upgrade-967000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:56:18.272968    8534 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:56:18.273008    8534 notify.go:220] Checking for updates...
	I1008 10:56:18.280984    8534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:18.284006    8534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:56:18.286949    8534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:56:18.290030    8534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:56:18.292977    8534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:56:18.296217    8534 config.go:182] Loaded profile config "running-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:18.299985    8534 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 10:56:18.302867    8534 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:56:18.306951    8534 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:56:18.312921    8534 start.go:297] selected driver: qemu2
	I1008 10:56:18.312927    8534 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:18.312990    8534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:56:18.315381    8534 cni.go:84] Creating CNI manager for ""
	I1008 10:56:18.315409    8534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
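[Editor's note] The cni.go lines above encode a version gate: Kubernetes v1.24 removed the dockershim, so a "docker" runtime cluster no longer gets pod networking for free and minikube recommends its bridge CNI. A sketch of that rule as a pure function; the function and parameter names are assumptions, not minikube's actual API.

	package sketch

	// chooseCNI mirrors the decision logged above: an explicit user choice
	// wins; otherwise the "docker" runtime on Kubernetes v1.24+ gets bridge.
	func chooseCNI(userCNI, runtime string, k8sMinor int) string {
		if userCNI != "" {
			return userCNI
		}
		if runtime == "docker" && k8sMinor >= 24 {
			return "bridge"
		}
		return "" // runtimes with their own CNI wiring need nothing extra here
	}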
	I1008 10:56:18.315439    8534 start.go:340] cluster config:
	{Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:18.315491    8534 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:56:18.323958    8534 out.go:177] * Starting "running-upgrade-967000" primary control-plane node in "running-upgrade-967000" cluster
	I1008 10:56:18.327895    8534 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:18.327908    8534 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1008 10:56:18.327918    8534 cache.go:56] Caching tarball of preloaded images
	I1008 10:56:18.327966    8534 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:56:18.327971    8534 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
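[Editor's note] The preload steps above avoid pulling control-plane images one by one: a single tarball of pre-extracted images, keyed by Kubernetes version, container runtime, and architecture, is looked up in the local cache and downloaded only when missing. A sketch of that lookup with the file-name pattern copied from the log; the helper names are assumptions.

	package sketch

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath reproduces the cache path seen in the log above.
	func preloadPath(minikubeHome, k8sVersion, runtime, arch string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-%s.tar.lz4",
			k8sVersion, runtime, arch)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	// havePreload is the "Found local preload ... skipping download" check.
	func havePreload(p string) bool {
		_, err := os.Stat(p)
		return err == nil
	}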
	I1008 10:56:18.328017    8534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/config.json ...
	I1008 10:56:18.328408    8534 start.go:360] acquireMachinesLock for running-upgrade-967000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
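[Editor's note] acquireMachinesLock serializes VM creation across concurrently running minikube processes; note the pid switch from 8534 to 8523 on the next line, where a second invocation (the stopped-upgrade-810000 profile) interleaves into this capture. Per the lock spec logged above, acquisition is retried every 500ms for up to 13 minutes. A generic retry-until-deadline sketch of that behavior; tryLock is an assumed callback, not minikube's API.

	package sketch

	import (
		"fmt"
		"time"
	)

	// acquire retries tryLock at a fixed delay until it succeeds or the
	// timeout expires, matching {Delay:500ms Timeout:13m0s} above.
	func acquire(tryLock func() bool, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if tryLock() {
				return nil
			}
			time.Sleep(delay)
		}
		return fmt.Errorf("timed out after %s waiting for lock", timeout)
	}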
	I1008 10:56:27.427719    8523 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/config.json ...
	I1008 10:56:27.427970    8523 machine.go:93] provisionDockerMachine start ...
	I1008 10:56:27.428024    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.428215    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.428221    8523 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 10:56:27.478482    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 10:56:27.478531    8523 buildroot.go:166] provisioning hostname "stopped-upgrade-810000"
	I1008 10:56:27.478602    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.478719    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.478728    8523 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-810000 && echo "stopped-upgrade-810000" | sudo tee /etc/hostname
	I1008 10:56:27.534221    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-810000
	
	I1008 10:56:27.534287    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.534391    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.534400    8523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-810000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-810000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-810000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 10:56:27.588499    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
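[Editor's note] The shell above makes the new hostname resolvable inside the guest: if no /etc/hosts entry already ends with the hostname, it rewrites an existing 127.0.1.1 line in place, otherwise it appends one. A Go rendering of that command with the profile name parameterized; purely illustrative, since the real string is produced by the provisioner.

	package sketch

	import "fmt"

	// hostsFixup renders the /etc/hosts command run above for a given profile
	// hostname.
	func hostsFixup(name string) string {
		return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
	fi
fi`, name)
	}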
	I1008 10:56:27.588517    8523 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19774-6384/.minikube CaCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19774-6384/.minikube}
	I1008 10:56:27.588528    8523 buildroot.go:174] setting up certificates
	I1008 10:56:27.588533    8523 provision.go:84] configureAuth start
	I1008 10:56:27.588557    8523 provision.go:143] copyHostCerts
	I1008 10:56:27.588646    8523 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem, removing ...
	I1008 10:56:27.589446    8523 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem
	I1008 10:56:27.589567    8523 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem (1078 bytes)
	I1008 10:56:27.589732    8523 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem, removing ...
	I1008 10:56:27.589736    8523 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem
	I1008 10:56:27.589792    8523 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem (1123 bytes)
	I1008 10:56:27.589899    8523 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem, removing ...
	I1008 10:56:27.589902    8523 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem
	I1008 10:56:27.589954    8523 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem (1679 bytes)
	I1008 10:56:27.590049    8523 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-810000 san=[127.0.0.1 localhost minikube stopped-upgrade-810000]
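[Editor's note] provision.go:117 issues a per-machine server certificate whose SANs cover every name the guest's Docker TLS endpoint may be addressed by: 127.0.0.1, localhost, minikube, and the profile name. A self-contained sketch of issuing such a certificate with Go's crypto/x509; key size, validity, and subject are assumptions, not minikube's actual parameters.

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// signServerCert issues a server cert for the given SANs, signed by the
	// supplied CA pair. IP-shaped SANs (127.0.0.1) go in IPAddresses, the rest
	// in DNSNames, matching the san=[...] list logged above.
	func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, org string, sans []string) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, s := range sans {
			if ip := net.ParseIP(s); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, s)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}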
	I1008 10:56:27.659656    8523 provision.go:177] copyRemoteCerts
	I1008 10:56:27.659955    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 10:56:27.659964    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 10:56:27.687882    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 10:56:27.694638    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 10:56:27.701414    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
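[Editor's note] copyRemoteCerts then pushes the CA certificate and the freshly minted server pair into /etc/docker on the guest, the exact paths the dockerd unit written below references via --tlscacert, --tlscert, and --tlskey. A sketch of the copy plan; scp here is an assumed stand-in for minikube's ssh_runner transfer.

	package sketch

	// copyRemoteCerts mirrors the three scp lines above: local paths are
	// relative to the .minikube directory, destinations are fixed.
	func copyRemoteCerts(scp func(src, dst string) error) error {
		pairs := [][2]string{
			{"certs/ca.pem", "/etc/docker/ca.pem"},
			{"machines/server.pem", "/etc/docker/server.pem"},
			{"machines/server-key.pem", "/etc/docker/server-key.pem"},
		}
		for _, p := range pairs {
			if err := scp(p[0], p[1]); err != nil {
				return err
			}
		}
		return nil
	}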
	I1008 10:56:27.708869    8523 provision.go:87] duration metric: took 120.327333ms to configureAuth
	I1008 10:56:27.708880    8523 buildroot.go:189] setting minikube options for container-runtime
	I1008 10:56:27.708988    8523 config.go:182] Loaded profile config "stopped-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:27.709046    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.709140    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.709146    8523 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1008 10:56:27.758595    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1008 10:56:27.758607    8523 buildroot.go:70] root file system type: tmpfs
	I1008 10:56:27.758684    8523 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1008 10:56:27.758750    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.758866    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.758901    8523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1008 10:56:27.814871    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1008 10:56:27.814945    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.815060    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.815071    8523 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1008 10:56:28.192106    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1008 10:56:28.192120    8523 machine.go:96] duration metric: took 764.14225ms to provisionDockerMachine
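
Note: the unit update above is deliberately idempotent: the rendered unit is written to docker.service.new, diffed against the live file, and only when they differ (or, as here on first provision, when diff cannot stat the target at all) is it moved into place and the daemon reloaded, enabled and restarted. The same pattern in isolation (render_unit is a hypothetical stand-in for the printf seen in the log):

    render_unit > /lib/systemd/system/docker.service.new
    if ! diff -u /lib/systemd/system/docker.service \
                 /lib/systemd/system/docker.service.new; then
      mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      systemctl daemon-reload
      systemctl enable docker
      systemctl restart docker
    fi
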
	I1008 10:56:28.192127    8523 start.go:293] postStartSetup for "stopped-upgrade-810000" (driver="qemu2")
	I1008 10:56:28.192133    8523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 10:56:28.192209    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 10:56:28.192220    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 10:56:28.220095    8523 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 10:56:28.221517    8523 info.go:137] Remote host: Buildroot 2021.02.12
	I1008 10:56:28.221528    8523 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/addons for local assets ...
	I1008 10:56:28.221609    8523 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/files for local assets ...
	I1008 10:56:28.221754    8523 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem -> 69072.pem in /etc/ssl/certs
	I1008 10:56:28.221904    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 10:56:28.225089    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:28.232589    8523 start.go:296] duration metric: took 40.457041ms for postStartSetup
	I1008 10:56:28.232604    8523 fix.go:56] duration metric: took 19.896968042s for fixHost
	I1008 10:56:28.232650    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.232750    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:28.232755    8523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 10:56:28.286891    8534 start.go:364] duration metric: took 9.958487291s to acquireMachinesLock for "running-upgrade-967000"
	I1008 10:56:28.286923    8534 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:56:28.286927    8534 fix.go:54] fixHost starting: 
	I1008 10:56:28.287652    8534 fix.go:112] recreateIfNeeded on running-upgrade-967000: state=Running err=<nil>
	W1008 10:56:28.287659    8534 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:56:28.291276    8534 out.go:177] * Updating the running qemu2 "running-upgrade-967000" VM ...
	I1008 10:56:28.286815    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410188.121122629
	
	I1008 10:56:28.286824    8523 fix.go:216] guest clock: 1728410188.121122629
	I1008 10:56:28.286829    8523 fix.go:229] Guest: 2024-10-08 10:56:28.121122629 -0700 PDT Remote: 2024-10-08 10:56:28.232607 -0700 PDT m=+20.099956751 (delta=-111.484371ms)
	I1008 10:56:28.286839    8523 fix.go:200] guest clock delta is within tolerance: -111.484371ms
	I1008 10:56:28.286844    8523 start.go:83] releasing machines lock for "stopped-upgrade-810000", held for 19.951219375s
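
Note: the fix.go lines above are the guest-clock sanity check: `date +%s.%N` runs in the guest over SSH, the result is compared against host wall-clock time, and the -111ms delta is accepted as within tolerance. A coarse host-side sketch of the same comparison (KEY and the 1-second threshold are assumptions; whole seconds sidestep the GNU-only %N on macOS):

    KEY=/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa
    guest=$(ssh -i "$KEY" -p 51195 docker@localhost date +%s)
    host=$(date +%s)
    delta=$(( host - guest ))
    [ "${delta#-}" -le 1 ] && echo "guest clock delta ${delta}s within tolerance"
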
	I1008 10:56:28.286937    8523 ssh_runner.go:195] Run: cat /version.json
	I1008 10:56:28.286945    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 10:56:28.286950    8523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 10:56:28.287206    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	W1008 10:56:28.363024    8523 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1008 10:56:28.363086    8523 ssh_runner.go:195] Run: systemctl --version
	I1008 10:56:28.365078    8523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 10:56:28.366929    8523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 10:56:28.366977    8523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1008 10:56:28.370058    8523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1008 10:56:28.375092    8523 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
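
Note: the two find/sed passes above normalize any pre-existing bridge and podman CNI configs: entries whose dst/subnet contain a colon (IPv6) are dropped, and every pod subnet/gateway is forced into minikube's 10.244.0.0/16 range, which is why 87-podman-bridge.conflist is reported as reconfigured. The same rewrite sketched with jq instead of sed (jq availability in the buildroot guest is an assumption):

    for f in /etc/cni/net.d/*bridge*.conflist; do
      jq '(.plugins[]? | select(.type == "bridge") | .ipam.ranges[][].subnet)
            = "10.244.0.0/16"' "$f" > "$f.tmp" && sudo mv "$f.tmp" "$f"
    done
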
	I1008 10:56:28.375102    8523 start.go:495] detecting cgroup driver to use...
	I1008 10:56:28.375222    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:28.381699    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1008 10:56:28.385161    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 10:56:28.388793    8523 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 10:56:28.388827    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 10:56:28.392359    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:28.396059    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 10:56:28.399095    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:28.402030    8523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 10:56:28.405351    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 10:56:28.408734    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 10:56:28.412330    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 10:56:28.415283    8523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 10:56:28.417965    8523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 10:56:28.421248    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:28.500121    8523 ssh_runner.go:195] Run: sudo systemctl restart containerd
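
Note: the sed chain above rewrites /etc/containerd/config.toml so containerd matches the cluster's expectations even though docker ends up as the selected runtime: pause image pinned to registry.k8s.io/pause:3.7, SystemdCgroup=false (the "cgroupfs" driver from the containerd.go:146 line), runc shim v2 everywhere, conf_dir set to /etc/cni/net.d, and enable_unprivileged_ports=true re-inserted under the CRI plugin. Reconstructed, the touched keys amount to roughly this fragment (only the edited keys; the surrounding table structure is assumed):

    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      sandbox_image = "registry.k8s.io/pause:3.7"
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
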
	I1008 10:56:28.505807    8523 start.go:495] detecting cgroup driver to use...
	I1008 10:56:28.505904    8523 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1008 10:56:28.513475    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:28.518838    8523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 10:56:28.526613    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:28.531587    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 10:56:28.536390    8523 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 10:56:28.573411    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 10:56:28.578750    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:28.584141    8523 ssh_runner.go:195] Run: which cri-dockerd
	I1008 10:56:28.585377    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1008 10:56:28.588500    8523 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1008 10:56:28.593426    8523 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1008 10:56:28.684808    8523 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1008 10:56:28.766039    8523 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1008 10:56:28.766103    8523 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1008 10:56:28.771916    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:28.857843    8523 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:29.990973    8523 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.133107417s)
	I1008 10:56:29.991109    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1008 10:56:29.996984    8523 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1008 10:56:30.004414    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:30.009664    8523 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1008 10:56:30.091409    8523 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1008 10:56:30.170005    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:30.248152    8523 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1008 10:56:30.254428    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:30.259685    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:30.340108    8523 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1008 10:56:30.380451    8523 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1008 10:56:30.380554    8523 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1008 10:56:30.382663    8523 start.go:563] Will wait 60s for crictl version
	I1008 10:56:30.382727    8523 ssh_runner.go:195] Run: which crictl
	I1008 10:56:30.384266    8523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 10:56:30.399673    8523 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1008 10:56:30.399753    8523 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:30.418880    8523 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:28.297346    8534 machine.go:93] provisionDockerMachine start ...
	I1008 10:56:28.297449    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.297582    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.297586    8534 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 10:56:28.361710    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-967000
	
	I1008 10:56:28.361728    8534 buildroot.go:166] provisioning hostname "running-upgrade-967000"
	I1008 10:56:28.361788    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.361911    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.361917    8534 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-967000 && echo "running-upgrade-967000" | sudo tee /etc/hostname
	I1008 10:56:28.429475    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-967000
	
	I1008 10:56:28.429544    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.429660    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.429670    8534 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-967000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-967000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-967000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 10:56:28.494208    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 10:56:28.494224    8534 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19774-6384/.minikube CaCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19774-6384/.minikube}
	I1008 10:56:28.494232    8534 buildroot.go:174] setting up certificates
	I1008 10:56:28.494237    8534 provision.go:84] configureAuth start
	I1008 10:56:28.494245    8534 provision.go:143] copyHostCerts
	I1008 10:56:28.494328    8534 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem, removing ...
	I1008 10:56:28.494335    8534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem
	I1008 10:56:28.494438    8534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem (1078 bytes)
	I1008 10:56:28.494612    8534 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem, removing ...
	I1008 10:56:28.494619    8534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem
	I1008 10:56:28.494661    8534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem (1123 bytes)
	I1008 10:56:28.494769    8534 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem, removing ...
	I1008 10:56:28.494773    8534 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem
	I1008 10:56:28.494810    8534 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem (1679 bytes)
	I1008 10:56:28.494908    8534 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-967000 san=[127.0.0.1 localhost minikube running-upgrade-967000]
	I1008 10:56:28.605164    8534 provision.go:177] copyRemoteCerts
	I1008 10:56:28.605214    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 10:56:28.605222    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 10:56:28.639233    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 10:56:28.649873    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1008 10:56:28.657210    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 10:56:28.664490    8534 provision.go:87] duration metric: took 170.247083ms to configureAuth
	I1008 10:56:28.664503    8534 buildroot.go:189] setting minikube options for container-runtime
	I1008 10:56:28.664615    8534 config.go:182] Loaded profile config "running-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:28.664666    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.664756    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.664762    8534 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1008 10:56:28.728476    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1008 10:56:28.728489    8534 buildroot.go:70] root file system type: tmpfs
	I1008 10:56:28.728539    8534 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1008 10:56:28.728608    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.728728    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.728762    8534 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1008 10:56:28.793589    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1008 10:56:28.793657    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.793773    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.793781    8534 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1008 10:56:28.860804    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1008 10:56:28.860820    8534 machine.go:96] duration metric: took 563.468791ms to provisionDockerMachine
	I1008 10:56:28.860828    8534 start.go:293] postStartSetup for "running-upgrade-967000" (driver="qemu2")
	I1008 10:56:28.860835    8534 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 10:56:28.860914    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 10:56:28.860923    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 10:56:28.896921    8534 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 10:56:28.898407    8534 info.go:137] Remote host: Buildroot 2021.02.12
	I1008 10:56:28.898416    8534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/addons for local assets ...
	I1008 10:56:28.898481    8534 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/files for local assets ...
	I1008 10:56:28.898583    8534 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem -> 69072.pem in /etc/ssl/certs
	I1008 10:56:28.898685    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 10:56:28.901233    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:28.908484    8534 start.go:296] duration metric: took 47.650458ms for postStartSetup
	I1008 10:56:28.908498    8534 fix.go:56] duration metric: took 621.572584ms for fixHost
	I1008 10:56:28.908544    8534 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.908654    8534 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104632480] 0x104634cc0 <nil>  [] 0s} localhost 51233 <nil> <nil>}
	I1008 10:56:28.908661    8534 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 10:56:28.974540    8534 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410188.918847935
	
	I1008 10:56:28.974555    8534 fix.go:216] guest clock: 1728410188.918847935
	I1008 10:56:28.974560    8534 fix.go:229] Guest: 2024-10-08 10:56:28.918847935 -0700 PDT Remote: 2024-10-08 10:56:28.908499 -0700 PDT m=+10.693740085 (delta=10.348935ms)
	I1008 10:56:28.974572    8534 fix.go:200] guest clock delta is within tolerance: 10.348935ms
	I1008 10:56:28.974575    8534 start.go:83] releasing machines lock for "running-upgrade-967000", held for 687.665625ms
	I1008 10:56:28.974669    8534 ssh_runner.go:195] Run: cat /version.json
	I1008 10:56:28.974669    8534 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 10:56:28.974680    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 10:56:28.974693    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	W1008 10:56:28.975272    8534 sshutil.go:64] dial failure (will retry): dial tcp [::1]:51233: connect: connection refused
	I1008 10:56:28.975294    8534 retry.go:31] will retry after 265.663688ms: dial tcp [::1]:51233: connect: connection refused
	W1008 10:56:29.274475    8534 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1008 10:56:29.274558    8534 ssh_runner.go:195] Run: systemctl --version
	I1008 10:56:29.276585    8534 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 10:56:29.278198    8534 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 10:56:29.278229    8534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1008 10:56:29.281407    8534 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1008 10:56:29.285742    8534 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 10:56:29.285749    8534 start.go:495] detecting cgroup driver to use...
	I1008 10:56:29.285824    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:29.291104    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1008 10:56:29.294412    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 10:56:29.297819    8534 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 10:56:29.297853    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 10:56:29.300999    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:29.303803    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 10:56:29.308018    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:29.311036    8534 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 10:56:29.314056    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 10:56:29.317198    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 10:56:29.320176    8534 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 10:56:29.323044    8534 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 10:56:29.326328    8534 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 10:56:29.329412    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:29.424476    8534 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1008 10:56:29.430607    8534 start.go:495] detecting cgroup driver to use...
	I1008 10:56:29.430688    8534 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1008 10:56:29.436482    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:29.442888    8534 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 10:56:29.449075    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:29.454285    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 10:56:29.462468    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:29.475289    8534 ssh_runner.go:195] Run: which cri-dockerd
	I1008 10:56:29.476566    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1008 10:56:29.479177    8534 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1008 10:56:29.484296    8534 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1008 10:56:29.583696    8534 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1008 10:56:29.672328    8534 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1008 10:56:29.672389    8534 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1008 10:56:29.677748    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:29.764069    8534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:32.233109    8534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.469028708s)
	I1008 10:56:32.233188    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1008 10:56:32.238484    8534 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1008 10:56:32.246613    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:32.253036    8534 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1008 10:56:32.341137    8534 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1008 10:56:32.429761    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:32.516493    8534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1008 10:56:32.524627    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:32.530209    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:32.603107    8534 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1008 10:56:32.650425    8534 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1008 10:56:32.650525    8534 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1008 10:56:32.653486    8534 start.go:563] Will wait 60s for crictl version
	I1008 10:56:32.653558    8534 ssh_runner.go:195] Run: which crictl
	I1008 10:56:32.655164    8534 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 10:56:32.666770    8534 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1008 10:56:32.666850    8534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:32.679899    8534 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:32.701887    8534 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1008 10:56:32.701975    8534 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1008 10:56:32.703458    8534 kubeadm.go:883] updating cluster {Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1008 10:56:32.703501    8534 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:32.703552    8534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:32.714438    8534 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:32.714447    8534 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
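
Note: the "wasn't preloaded" verdict above is a registry-rename artifact: the restored images carry k8s.gcr.io/* tags (the pre-rename registry), while this minikube checks for registry.k8s.io/* tags, so kube-apiserver is judged missing and the whole image set is re-resolved through the on-disk image cache below. Retagging would make the same bytes satisfy the check (a sketch of the mismatch, not what minikube does here):

    for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
      docker tag "k8s.gcr.io/${img}:v1.24.1" "registry.k8s.io/${img}:v1.24.1"
    done
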
	I1008 10:56:32.714480    8534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:32.717845    8534 ssh_runner.go:195] Run: which lz4
	I1008 10:56:32.719373    8534 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 10:56:32.720857    8534 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 10:56:32.720876    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1008 10:56:30.439828    8523 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1008 10:56:30.439916    8523 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1008 10:56:30.441200    8523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
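
Note: the bash one-liner above pins host.minikube.internal idempotently: grep -v strips any stale mapping, the fresh entry is appended, and the result goes through a temp file plus `sudo cp` back onto /etc/hosts. 10.0.2.2 is the host-side gateway address of QEMU user-mode networking, so the guest ends up with:

    10.0.2.2	host.minikube.internal
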
	I1008 10:56:30.445673    8523 kubeadm.go:883] updating cluster {Name:stopped-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51227 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1008 10:56:30.445722    8523 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:30.445774    8523 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:30.457053    8523 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:30.457061    8523 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:30.457117    8523 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:30.460990    8523 ssh_runner.go:195] Run: which lz4
	I1008 10:56:30.462346    8523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 10:56:30.463790    8523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 10:56:30.463800    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1008 10:56:31.412723    8523 docker.go:649] duration metric: took 950.4175ms to copy over tarball
	I1008 10:56:31.412800    8523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 10:56:32.615930    8523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.203114791s)
	I1008 10:56:32.615947    8523 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 10:56:32.632263    8523 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:32.635817    8523 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1008 10:56:32.641578    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:32.714233    8523 ssh_runner.go:195] Run: sudo systemctl restart docker
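
Note: the preload path above avoids pulling images over the network: the ~360 MB lz4 tarball of a pre-baked /var/lib/docker overlay2 tree is scp'd into the guest, unpacked under /var with extended attributes preserved, deleted, and docker restarted so the daemon picks up the restored image store. A manual equivalent over the same SSH endpoint (KEY/CACHE are placeholders for the paths in the log; /tmp is used here because scp as the docker user cannot write to /):

    KEY=/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa
    CACHE=/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball
    scp -i "$KEY" -P 51195 \
      "$CACHE/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4" \
      docker@localhost:/tmp/preloaded.tar.lz4
    ssh -i "$KEY" -p 51195 docker@localhost \
      'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4 && sudo systemctl restart docker'
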
	I1008 10:56:33.699897    8534 docker.go:649] duration metric: took 980.562917ms to copy over tarball
	I1008 10:56:33.699975    8534 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 10:56:35.226344    8534 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.5263585s)
	I1008 10:56:35.226358    8534 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1008 10:56:35.243689    8534 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:35.249139    8534 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1008 10:56:35.256162    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:35.348130    8534 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:36.520078    8534 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.171933958s)
	I1008 10:56:36.520172    8534 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:36.531369    8534 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:36.531378    8534 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:36.531383    8534 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 10:56:36.535330    8534 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.537083    8534 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:36.539286    8534 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.539437    8534 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:36.541549    8534 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:36.541956    8534 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:36.543847    8534 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:36.543933    8534 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:36.544084    8534 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:36.545405    8534 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:36.546330    8534 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1008 10:56:36.546556    8534 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:36.548002    8534 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:36.548034    8534 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:36.549075    8534 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1008 10:56:36.550888    8534 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:37.023336    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:37.034409    8534 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1008 10:56:37.034436    8534 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:37.034484    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:37.047582    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1008 10:56:37.059178    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:37.069379    8534 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1008 10:56:37.069408    8534 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:37.069466    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:37.087848    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1008 10:56:37.104557    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:37.116370    8534 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1008 10:56:37.116390    8534 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:37.116447    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:37.132206    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I1008 10:56:37.140792    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:37.153508    8534 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1008 10:56:37.153534    8534 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:37.153581    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:37.164642    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1008 10:56:37.260591    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:37.275643    8534 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1008 10:56:37.275665    8534 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:37.275732    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:37.281008    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1008 10:56:37.288187    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1008 10:56:37.288313    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:37.292850    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1008 10:56:37.292875    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I1008 10:56:37.293032    8534 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1008 10:56:37.293051    8534 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1008 10:56:37.293094    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1008 10:56:37.365839    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1008 10:56:37.366017    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	W1008 10:56:37.375485    8534 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:37.375628    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	W1008 10:56:37.381430    8534 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:37.381551    8534 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:37.394467    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1008 10:56:37.394499    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1008 10:56:37.453777    8534 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1008 10:56:37.453803    8534 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:37.453866    8534 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1008 10:56:37.453876    8534 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:37.453878    8534 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:37.453910    8534 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:37.461744    8534 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1008 10:56:37.461775    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1008 10:56:37.509775    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 10:56:37.509927    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:37.553698    8534 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1008 10:56:37.553835    8534 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:37.630692    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1008 10:56:37.630745    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1008 10:56:37.630748    8534 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1008 10:56:37.630761    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1008 10:56:37.630789    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1008 10:56:37.746169    8534 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:37.746212    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1008 10:56:38.066022    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 10:56:38.066050    8534 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:38.066059    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1008 10:56:38.106861    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1008 10:56:38.106886    8534 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:38.106893    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
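
Each transferred tarball is then streamed into the guest's Docker daemon with `sudo cat <tar> | docker load`, as the Run lines above show. A sketch of that load step against a local daemon, with a hypothetical tarball path; `docker load` reads the image archive from stdin exactly like the piped bash command does:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        f, err := os.Open("/tmp/images/etcd_3.5.3-0") // hypothetical tarball path
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // Feed the archive to docker load via stdin, mirroring the
        // "sudo cat <tar> | docker load" pipeline in the log.
        cmd := exec.Command("docker", "load")
        cmd.Stdin = f
        cmd.Stdout = os.Stdout
        cmd.Stderr = os.Stderr
        if err := cmd.Run(); err != nil {
            panic(err)
        }
    }
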
	I1008 10:56:34.565123    8523 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.850877667s)
	I1008 10:56:34.565239    8523 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:34.587126    8523 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:34.587137    8523 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:34.587157    8523 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 10:56:34.594483    8523 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:34.595048    8523 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:34.596938    8523 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:34.598676    8523 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:34.598904    8523 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:34.599247    8523 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:34.601399    8523 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:34.601508    8523 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1008 10:56:34.601444    8523 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:34.603357    8523 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:34.604448    8523 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:34.604467    8523 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1008 10:56:34.605608    8523 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:34.605627    8523 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:34.606792    8523 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:34.607970    8523 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
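
The daemon-lookup misses above are expected: none of the registry.k8s.io images are in the host's Docker daemon, so each lookup falls back to the on-disk cache. The inspect/rmi lines that follow implement a simple decision per tag: inspect it, compare its ID against the expected hash, and if it is missing or wrong, remove it and mark it as "needs transfer". A sketch of that check (the expected ID string is a hypothetical placeholder):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        image := "registry.k8s.io/etcd:3.5.3-0"
        want := "sha256:<expected-image-id>" // hypothetical placeholder

        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        got := strings.TrimSpace(string(out))
        if err != nil || got != want {
            // Missing or mismatched image: remove it (if present) so the
            // cached tarball can be transferred and loaded instead.
            exec.Command("docker", "rmi", image).Run()
            fmt.Println(image, "needs transfer")
            return
        }
        fmt.Println(image, "already loaded")
    }
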
	I1008 10:56:35.141027    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:35.149873    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:35.155703    8523 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1008 10:56:35.156150    8523 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:35.156215    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:35.165881    8523 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1008 10:56:35.165923    8523 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:35.165995    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:35.179936    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1008 10:56:35.180794    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:35.184316    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1008 10:56:35.185481    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1008 10:56:35.185512    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W1008 10:56:35.215035    8523 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:35.215240    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:35.231823    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:35.253935    8523 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1008 10:56:35.253999    8523 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:35.254064    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:35.282358    8523 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1008 10:56:35.282387    8523 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:35.282452    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:35.291809    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1008 10:56:35.291973    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:35.298821    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1008 10:56:35.333080    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1008 10:56:35.333189    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1008 10:56:35.333233    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1008 10:56:35.359450    8523 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1008 10:56:35.359487    8523 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1008 10:56:35.359556    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1008 10:56:35.410739    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1008 10:56:35.410898    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1008 10:56:35.434910    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1008 10:56:35.434939    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1008 10:56:35.439577    8523 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:35.439591    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1008 10:56:35.493037    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:35.545733    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1008 10:56:35.545756    8523 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1008 10:56:35.545761    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1008 10:56:35.545769    8523 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1008 10:56:35.545788    8523 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:35.545847    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:35.564161    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1008 10:56:35.567875    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.578557    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1008 10:56:35.578579    8523 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:35.578595    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1008 10:56:35.579445    8523 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1008 10:56:35.579464    8523 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.579527    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.720061    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1008 10:56:35.720098    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W1008 10:56:36.020898    8523 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:36.021019    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.033649    8523 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1008 10:56:36.033673    8523 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.033749    8523 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.050069    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 10:56:36.050221    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:36.051707    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1008 10:56:36.051719    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1008 10:56:36.084119    8523 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:36.084132    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1008 10:56:36.322868    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 10:56:36.322914    8523 cache_images.go:92] duration metric: took 1.735751583s to LoadCachedImages
	W1008 10:56:36.323129    8523 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1008 10:56:36.323255    8523 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1008 10:56:36.323320    8523 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-810000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
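
The kubelet unit text above is generated by minikube and written as a systemd drop-in (see the 10-kubeadm.conf scp a few lines below). How exactly it renders the unit is not visible in the log; a plausible sketch using Go's text/template with a trimmed-down flag set (KubeletPath, NodeName, and NodeIP are illustrative parameter names, not minikube's):

    package main

    import (
        "os"
        "text/template"
    )

    // Trimmed-down unit body; the full flag set minikube writes is shown
    // verbatim in the log above.
    const unitTmpl = "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\nExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unitTmpl))
        if err := t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.24.1/kubelet",
            "NodeName":    "stopped-upgrade-810000",
            "NodeIP":      "10.0.2.15",
        }); err != nil {
            panic(err)
        }
    }
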
	I1008 10:56:36.323390    8523 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1008 10:56:36.341628    8523 cni.go:84] Creating CNI manager for ""
	I1008 10:56:36.341644    8523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:56:36.341651    8523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 10:56:36.341667    8523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-810000 NodeName:stopped-upgrade-810000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 10:56:36.341733    8523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-810000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 10:56:36.341796    8523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1008 10:56:36.344993    8523 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 10:56:36.345032    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 10:56:36.347873    8523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1008 10:56:36.352785    8523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 10:56:36.358358    8523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1008 10:56:36.364395    8523 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1008 10:56:36.365845    8523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
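
The one-liner above makes the control-plane.minikube.internal hosts entry idempotent: filter out any stale tab-separated entry for the host, append the fresh mapping, and copy the temp file back over /etc/hosts. A sketch that builds and runs the same shape of command against a scratch file (/tmp/hosts-sketch stands in for /etc/hosts, which would need sudo):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        ip, host := "10.0.2.15", "control-plane.minikube.internal"
        hosts := "/tmp/hosts-sketch" // hypothetical stand-in for /etc/hosts

        // Same shape as the logged command: drop any old tab-separated
        // entry for the host, append the fresh one, then replace the file.
        script := fmt.Sprintf(
            "touch %[1]s; { grep -v $'\\t%[2]s$' %[1]s; echo \"%[3]s\t%[2]s\"; } > /tmp/h.$$; cp /tmp/h.$$ %[1]s",
            hosts, host, ip)
        if out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput(); err != nil {
            fmt.Println(err, string(out))
        }
    }
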
	I1008 10:56:36.370051    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:36.459508    8523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 10:56:36.467459    8523 certs.go:68] Setting up /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000 for IP: 10.0.2.15
	I1008 10:56:36.467472    8523 certs.go:194] generating shared ca certs ...
	I1008 10:56:36.467482    8523 certs.go:226] acquiring lock for ca certs: {Name:mkb70c9691d78e2ecd0076f3f0607577e8eefb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.467750    8523 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key
	I1008 10:56:36.467792    8523 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key
	I1008 10:56:36.467960    8523 certs.go:256] generating profile certs ...
	I1008 10:56:36.468089    8523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.key
	I1008 10:56:36.468105    8523 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39
	I1008 10:56:36.468265    8523 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1008 10:56:36.525454    8523 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39 ...
	I1008 10:56:36.525479    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39: {Name:mk811a22ffd011f3d85e0fb59b6e1f5c93ef2a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.525769    8523 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39 ...
	I1008 10:56:36.525774    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39: {Name:mk1ff52479bc6a11b5837c24228770ede08bc28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.525975    8523 certs.go:381] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt
	I1008 10:56:36.526113    8523 certs.go:385] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key
	I1008 10:56:36.526286    8523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/proxy-client.key
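
The apiserver certificate generated above carries the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15], which is what lets clients reach the apiserver by service IP, loopback, or the VM address. A sketch of producing a certificate with those SANs using Go's crypto/x509 (self-signed here for brevity; minikube actually signs profile certs with the minikubeCA key):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The IP SANs from the log line above.
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
            },
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
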
	I1008 10:56:36.526423    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem (1338 bytes)
	W1008 10:56:36.526447    8523 certs.go:480] ignoring /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907_empty.pem, impossibly tiny 0 bytes
	I1008 10:56:36.526453    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 10:56:36.526474    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem (1078 bytes)
	I1008 10:56:36.526492    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem (1123 bytes)
	I1008 10:56:36.526509    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem (1679 bytes)
	I1008 10:56:36.526560    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:36.527897    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 10:56:36.535729    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 10:56:36.550696    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 10:56:36.558383    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 10:56:36.565482    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 10:56:36.576011    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 10:56:36.584505    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 10:56:36.592625    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 10:56:36.600281    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem --> /usr/share/ca-certificates/6907.pem (1338 bytes)
	I1008 10:56:36.607236    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /usr/share/ca-certificates/69072.pem (1708 bytes)
	I1008 10:56:36.614361    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 10:56:36.622594    8523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 10:56:36.630522    8523 ssh_runner.go:195] Run: openssl version
	I1008 10:56:36.632497    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907.pem && ln -fs /usr/share/ca-certificates/6907.pem /etc/ssl/certs/6907.pem"
	I1008 10:56:36.635624    8523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907.pem
	I1008 10:56:36.637097    8523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:43 /usr/share/ca-certificates/6907.pem
	I1008 10:56:36.637136    8523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907.pem
	I1008 10:56:36.638810    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6907.pem /etc/ssl/certs/51391683.0"
	I1008 10:56:36.641900    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69072.pem && ln -fs /usr/share/ca-certificates/69072.pem /etc/ssl/certs/69072.pem"
	I1008 10:56:36.645431    8523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69072.pem
	I1008 10:56:36.646842    8523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:43 /usr/share/ca-certificates/69072.pem
	I1008 10:56:36.646876    8523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69072.pem
	I1008 10:56:36.648496    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69072.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 10:56:36.651786    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 10:56:36.655071    8523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:36.656622    8523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:55 /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:36.656664    8523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:36.659102    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
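
The 8-hex-digit names being symlinked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: `openssl x509 -hash -noout` prints the hash, and OpenSSL resolves trusted CAs in /etc/ssl/certs via `<hash>.0` links. A sketch of computing the hash and creating the link (written under /tmp, since /etc/ssl/certs needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log

        // Print the subject hash that OpenSSL uses for directory lookups.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        hash := strings.TrimSpace(string(out))
        link := "/tmp/" + hash + ".0" // real target dir is /etc/ssl/certs
        os.Remove(link)               // make re-runs idempotent, like ln -fs
        if err := os.Symlink(pem, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pem)
    }
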
	I1008 10:56:36.662411    8523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 10:56:36.664026    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 10:56:36.666141    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 10:56:36.668409    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 10:56:36.670496    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 10:56:36.672467    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 10:56:36.674190    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
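
Each `-checkend 86400` invocation above asks OpenSSL whether the certificate will still be valid in 24 hours; a non-zero exit means it expires within that window and would need regeneration. A sketch of one such check (assumes it runs on the node where the logged cert path exists):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cert := "/var/lib/minikube/certs/apiserver.crt" // guest path from the log
        // openssl exits non-zero if the cert expires within 86400 seconds.
        err := exec.Command("openssl", "x509", "-noout", "-in", cert,
            "-checkend", "86400").Run()
        if err != nil {
            fmt.Println("certificate expires within 24h (or check failed):", err)
            return
        }
        fmt.Println("certificate valid for at least 24h")
    }
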
	I1008 10:56:36.676045    8523 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51227 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:36.676108    8523 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:36.686216    8523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 10:56:36.690066    8523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 10:56:36.690075    8523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 10:56:36.690115    8523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 10:56:36.693719    8523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:36.693947    8523 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-810000" does not appear in /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:36.693967    8523 kubeconfig.go:62] /Users/jenkins/minikube-integration/19774-6384/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-810000" cluster setting kubeconfig missing "stopped-upgrade-810000" context setting]
	I1008 10:56:36.694142    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.695379    8523 kapi.go:59] client config for stopped-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a380f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 10:56:36.701213    8523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 10:56:36.705083    8523 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-810000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
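
Drift detection here is just `diff -u` on the old and new kubeadm.yaml: exit status 0 means nothing changed, 1 means the cluster should be reconfigured from the .new file (in this run, the criSocket scheme and the cgroup driver differ). A sketch of the same decision using hypothetical local copies of the two files:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        oldCfg := "/tmp/kubeadm.yaml"     // hypothetical local copies of
        newCfg := "/tmp/kubeadm.yaml.new" // the two files diffed in the log

        out, err := exec.Command("diff", "-u", oldCfg, newCfg).CombinedOutput()
        if err != nil {
            // diff exits 1 on differences (2 on trouble); either way the
            // safe choice is to reconfigure from the .new file.
            fmt.Println("config drift detected, will reconfigure:")
            fmt.Print(string(out))
            return
        }
        fmt.Println("kubeadm config unchanged")
    }
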
	I1008 10:56:36.705091    8523 kubeadm.go:1160] stopping kube-system containers ...
	I1008 10:56:36.705139    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:36.716101    8523 docker.go:483] Stopping containers: [56f80cdf5031 5f436e794069 838f048371b1 723b63a1a7b2 b60901c3d729 e61fade57ee9 b706c818d35c fbfef5a53508]
	I1008 10:56:36.716187    8523 ssh_runner.go:195] Run: docker stop 56f80cdf5031 5f436e794069 838f048371b1 723b63a1a7b2 b60901c3d729 e61fade57ee9 b706c818d35c fbfef5a53508
	I1008 10:56:36.726640    8523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 10:56:36.732196    8523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 10:56:36.735623    8523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 10:56:36.735634    8523 kubeadm.go:157] found existing configuration files:
	
	I1008 10:56:36.735704    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf
	I1008 10:56:36.738560    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 10:56:36.738596    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 10:56:36.741638    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf
	I1008 10:56:36.744941    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 10:56:36.744983    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 10:56:36.748365    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf
	I1008 10:56:36.751114    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 10:56:36.751154    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 10:56:36.753907    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf
	I1008 10:56:36.756956    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 10:56:36.756992    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 10:56:36.759927    8523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 10:56:36.762709    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:36.787917    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:37.379812    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:37.527301    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:37.557569    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
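
Rather than a full `kubeadm init`, the restart path above replays individual init phases in order: certs, kubeconfig, kubelet-start, control-plane, etcd. A sketch of that sequence (assumes kubeadm is on PATH and the config sits at the logged location):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // The phase order visible in the log's Run lines.
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
            cmd := exec.Command("kubeadm", args...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Println("phase failed:", p, err)
                return
            }
        }
    }
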
	I1008 10:56:37.588894    8523 api_server.go:52] waiting for apiserver process to appear ...
	I1008 10:56:37.588978    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:38.091042    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
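
The roughly 500ms spacing between the two pgrep runs above reflects a poll loop: keep running `pgrep -xnf kube-apiserver.*minikube.*` until it exits 0, meaning an apiserver process exists. A sketch of such a loop with a hypothetical two-minute timeout:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // hypothetical timeout
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching process appears.
            if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                fmt.Println("apiserver process is up")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver")
    }
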
	I1008 10:56:38.284062    8534 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1008 10:56:38.284102    8534 cache_images.go:92] duration metric: took 1.752717292s to LoadCachedImages
	W1008 10:56:38.284157    8534 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I1008 10:56:38.284166    8534 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1008 10:56:38.284227    8534 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-967000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 10:56:38.284310    8534 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1008 10:56:38.306702    8534 cni.go:84] Creating CNI manager for ""
	I1008 10:56:38.306716    8534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:56:38.306722    8534 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 10:56:38.306733    8534 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-967000 NodeName:running-upgrade-967000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 10:56:38.306820    8534 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-967000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1008 10:56:38.306899    8534 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1008 10:56:38.315080    8534 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 10:56:38.315158    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 10:56:38.318740    8534 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1008 10:56:38.325400    8534 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 10:56:38.346138    8534 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1008 10:56:38.353280    8534 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1008 10:56:38.355267    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:38.443873    8534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 10:56:38.450810    8534 certs.go:68] Setting up /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000 for IP: 10.0.2.15
	I1008 10:56:38.450829    8534 certs.go:194] generating shared ca certs ...
	I1008 10:56:38.450842    8534 certs.go:226] acquiring lock for ca certs: {Name:mkb70c9691d78e2ecd0076f3f0607577e8eefb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:38.451028    8534 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key
	I1008 10:56:38.451068    8534 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key
	I1008 10:56:38.451074    8534 certs.go:256] generating profile certs ...
	I1008 10:56:38.451136    8534 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.key
	I1008 10:56:38.451156    8534 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328
	I1008 10:56:38.451170    8534 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1008 10:56:38.506191    8534 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328 ...
	I1008 10:56:38.506207    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328: {Name:mkcc83885ed6de6bc78b832de69b92f50e4770e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:38.506537    8534 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328 ...
	I1008 10:56:38.506543    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328: {Name:mk70ebdbfc3de979abf7675c67172a258f406809 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:38.506712    8534 certs.go:381] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt.c7af3328 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt
	I1008 10:56:38.506823    8534 certs.go:385] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key.c7af3328 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key
	I1008 10:56:38.506958    8534 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/proxy-client.key
	I1008 10:56:38.507093    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem (1338 bytes)
	W1008 10:56:38.507118    8534 certs.go:480] ignoring /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907_empty.pem, impossibly tiny 0 bytes
	I1008 10:56:38.507126    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 10:56:38.507148    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem (1078 bytes)
	I1008 10:56:38.507165    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem (1123 bytes)
	I1008 10:56:38.507184    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem (1679 bytes)
	I1008 10:56:38.507222    8534 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem (1708 bytes)
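Note on the cert generation above: the profile apiserver cert is created with an IP SAN list covering the service VIP (10.96.0.1), loopback, and the guest address (10.0.2.15). A quick way to confirm the SANs actually baked into the cert, assuming the profile path shown in the log:

    # Print the Subject Alternative Names of the generated apiserver cert;
    # per the log it should list IP:10.96.0.1, IP:127.0.0.1, IP:10.0.0.1, IP:10.0.2.15.
    CERT=/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt
    openssl x509 -noout -text -in "$CERT" | grep -A1 'Subject Alternative Name'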
	I1008 10:56:38.507570    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 10:56:38.519761    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 10:56:38.531297    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 10:56:38.544070    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 10:56:38.557182    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 10:56:38.566164    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 10:56:38.577380    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 10:56:38.589742    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 10:56:38.603984    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem --> /usr/share/ca-certificates/6907.pem (1338 bytes)
	I1008 10:56:38.615702    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /usr/share/ca-certificates/69072.pem (1708 bytes)
	I1008 10:56:38.628230    8534 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 10:56:38.635451    8534 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 10:56:38.647081    8534 ssh_runner.go:195] Run: openssl version
	I1008 10:56:38.650325    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907.pem && ln -fs /usr/share/ca-certificates/6907.pem /etc/ssl/certs/6907.pem"
	I1008 10:56:38.658599    8534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907.pem
	I1008 10:56:38.662341    8534 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:43 /usr/share/ca-certificates/6907.pem
	I1008 10:56:38.662380    8534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907.pem
	I1008 10:56:38.666211    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6907.pem /etc/ssl/certs/51391683.0"
	I1008 10:56:38.672168    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69072.pem && ln -fs /usr/share/ca-certificates/69072.pem /etc/ssl/certs/69072.pem"
	I1008 10:56:38.675684    8534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69072.pem
	I1008 10:56:38.679334    8534 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:43 /usr/share/ca-certificates/69072.pem
	I1008 10:56:38.679383    8534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69072.pem
	I1008 10:56:38.681362    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69072.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 10:56:38.684687    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 10:56:38.688246    8534 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:38.693165    8534 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:55 /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:38.693221    8534 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:38.695868    8534 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
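The 51391683.0 and b5213941.0 symlink targets above come from OpenSSL's subject-hash naming scheme: the library resolves trust anchors by files named <subject-hash>.0 in the cert directory. A minimal sketch of the same dance, using the paths from the log:

    # Compute the subject hash and install the CA under the filename OpenSSL
    # will actually look up; this is what the test's ln -fs commands are doing.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"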
	I1008 10:56:38.704692    8534 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 10:56:38.710715    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 10:56:38.714038    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 10:56:38.716247    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 10:56:38.725659    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 10:56:38.730479    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 10:56:38.733306    8534 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
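Each of the -checkend 86400 probes above asks whether the cert will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the cert as due for regeneration. For example:

    # Exit status 0 means the cert survives the next 24 h; exit status 1 means
    # it expires within the window and should be regenerated.
    if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver.crt expires within 24h"
    fi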
	I1008 10:56:38.736022    8534 kubeadm.go:392] StartCluster: {Name:running-upgrade-967000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51326 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-967000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:38.736119    8534 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:38.754233    8534 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 10:56:38.763925    8534 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 10:56:38.763934    8534 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 10:56:38.764002    8534 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 10:56:38.768216    8534 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:38.768546    8534 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-967000" does not appear in /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:38.768650    8534 kubeconfig.go:62] /Users/jenkins/minikube-integration/19774-6384/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-967000" cluster setting kubeconfig missing "running-upgrade-967000" context setting]
	I1008 10:56:38.769252    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
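The repair step above re-adds the missing cluster and context stanzas to the shared kubeconfig. A rough manual equivalent, offered only as a sketch since minikube writes the file directly under a lock rather than shelling out to kubectl (paths and endpoint taken from the log):

    KC=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
    CA=/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt
    kubectl config --kubeconfig="$KC" set-cluster running-upgrade-967000 \
        --server=https://10.0.2.15:8443 --certificate-authority="$CA"
    kubectl config --kubeconfig="$KC" set-context running-upgrade-967000 \
        --cluster=running-upgrade-967000 --user=running-upgrade-967000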
	I1008 10:56:38.769670    8534 kapi.go:59] client config for running-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060880f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 10:56:38.770026    8534 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 10:56:38.773122    8534 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-967000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
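The drift check above is a plain unified diff of the previously deployed kubeadm.yaml against the freshly rendered kubeadm.yaml.new; any difference (here the criSocket URI scheme and the cgroup driver) triggers a reconfigure instead of a clean start. A sketch of the same check:

    # Non-empty diff output (exit status 1) means the rendered config no longer
    # matches what is on disk, so the cluster is reconfigured from the new file.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
        echo "kubeadm config drift detected"
    fi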
	I1008 10:56:38.773131    8534 kubeadm.go:1160] stopping kube-system containers ...
	I1008 10:56:38.773183    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:38.796676    8534 docker.go:483] Stopping containers: [dd87d47a322c 2d3a13ff6b91 7ae29440d84a a24cf16eff8a 1f931536e1ad 95eb0774c47b 3a415fbebc63 af880d816398 38f5e47e0e83 86d190fbb173 69180d4d04b0 cd190ed050a6 8484d7c0f593 cdc5f2e0c4f3 b3d318f8155e c99057fc3b4f cd5622ea9ada c84a40b214e0 c0ecc3779b41]
	I1008 10:56:38.796836    8534 ssh_runner.go:195] Run: docker stop dd87d47a322c 2d3a13ff6b91 7ae29440d84a a24cf16eff8a 1f931536e1ad 95eb0774c47b 3a415fbebc63 af880d816398 38f5e47e0e83 86d190fbb173 69180d4d04b0 cd190ed050a6 8484d7c0f593 cdc5f2e0c4f3 b3d318f8155e c99057fc3b4f cd5622ea9ada c84a40b214e0 c0ecc3779b41
	I1008 10:56:40.029245    8534 ssh_runner.go:235] Completed: docker stop dd87d47a322c 2d3a13ff6b91 7ae29440d84a a24cf16eff8a 1f931536e1ad 95eb0774c47b 3a415fbebc63 af880d816398 38f5e47e0e83 86d190fbb173 69180d4d04b0 cd190ed050a6 8484d7c0f593 cdc5f2e0c4f3 b3d318f8155e c99057fc3b4f cd5622ea9ada c84a40b214e0 c0ecc3779b41: (1.232394083s)
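The container sweep above relies on the kubelet's naming convention: Docker containers it creates are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name regex anchored on _(kube-system)_ selects exactly the control-plane pods. An equivalent one-liner, assuming the same Docker host:

    # List and stop every kube-system container in one pass, mirroring the
    # two-step docker ps / docker stop sequence in the log.
    docker ps -a --filter=name='k8s_.*_(kube-system)_' --format '{{.ID}}' | xargs -r docker stop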
	I1008 10:56:40.029353    8534 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 10:56:40.111599    8534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 10:56:40.115326    8534 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5639 Oct  8 17:56 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Oct  8 17:56 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Oct  8 17:56 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Oct  8 17:56 /etc/kubernetes/scheduler.conf
	
	I1008 10:56:40.115370    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf
	I1008 10:56:40.118686    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.118718    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 10:56:40.121763    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf
	I1008 10:56:40.124653    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.124686    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 10:56:40.127614    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf
	I1008 10:56:40.130402    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.130434    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 10:56:40.133290    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf
	I1008 10:56:40.136203    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:40.136237    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
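Each of the four grep/rm pairs above checks a kubeconfig under /etc/kubernetes for the expected control-plane endpoint and deletes it on a miss, so the kubeadm kubeconfig phase that follows regenerates it fresh. Compressed into a loop:

    # Remove any conf file that does not reference the expected endpoint;
    # "kubeadm init phase kubeconfig" will recreate the missing ones.
    EP='https://control-plane.minikube.internal:51326'
    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done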
	I1008 10:56:40.139234    8534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 10:56:40.144100    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.167223    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.602116    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.858321    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:40.892835    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
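Rather than a full kubeadm init, the restart path replays individual init phases against the same rendered config: certs, kubeconfig, kubelet-start, control-plane, and local etcd, in that order. A condensed sketch of the sequence above (binary path and version as seen in the log):

    # Replay the init phases the restart path runs; word splitting on $phase
    # is intentional so "certs all" expands to two arguments.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
        sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" \
            kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done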
	I1008 10:56:40.916618    8534 api_server.go:52] waiting for apiserver process to appear ...
	I1008 10:56:40.916702    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:41.418814    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:41.918769    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:41.923085    8534 api_server.go:72] duration metric: took 1.006479791s to wait for apiserver process to appear ...
	I1008 10:56:41.923094    8534 api_server.go:88] waiting for apiserver healthz status ...
	I1008 10:56:41.923105    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
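From here on, two test processes (pids 8534 and 8523, whose output is interleaved below) poll the apiserver's healthz endpoint on a roughly 5-second timeout. The manual equivalent of one probe, with -k because the serving cert is signed by minikube's private CA:

    # One healthz probe as the wait loop performs it; "ok" on success, while a
    # timeout here matches the "context deadline exceeded" lines that follow.
    curl -k --max-time 5 https://10.0.2.15:8443/healthz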
	I1008 10:56:38.591031    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:38.596317    8523 api_server.go:72] duration metric: took 1.007425209s to wait for apiserver process to appear ...
	I1008 10:56:38.596330    8523 api_server.go:88] waiting for apiserver healthz status ...
	I1008 10:56:38.596340    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:46.925289    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:46.925370    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:43.599173    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:43.599207    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:51.926037    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:51.926101    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:48.600084    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:48.600171    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:56.926787    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:56.926833    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:53.601396    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:53.601498    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:01.927672    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:01.927755    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:58.602982    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:58.603036    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:06.928883    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:06.928968    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:03.604727    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:03.604816    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:11.930696    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:11.930752    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:08.606917    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:08.607021    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:16.932688    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:16.932730    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:13.609789    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:13.609868    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:21.935032    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:21.935074    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:18.611161    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:18.611188    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:26.937414    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:26.937439    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:23.613342    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:23.613381    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:31.939634    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:31.939657    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:28.615578    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:28.615606    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:36.941943    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:36.941975    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:33.617853    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:33.617899    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:41.944195    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:41.944357    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:41.966030    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:57:41.966134    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:41.979245    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:57:41.979332    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:41.990227    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:57:41.990305    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:42.003295    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:57:42.003384    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:42.013606    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:57:42.013676    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:42.026390    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:57:42.026479    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:42.037529    8534 logs.go:282] 0 containers: []
	W1008 10:57:42.037541    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:42.037607    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:42.048155    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:57:42.048176    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:57:42.048180    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:57:42.060794    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:57:42.060805    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:57:42.084283    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:42.084294    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:42.089047    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:42.089054    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:42.197653    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:57:42.197665    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:57:42.209308    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:57:42.209321    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:57:42.220969    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:57:42.220980    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:57:42.232586    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:57:42.232595    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:57:42.244610    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:57:42.244623    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:57:42.256311    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:57:42.256321    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:57:42.273555    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:57:42.273565    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:57:42.286153    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:57:42.286167    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:57:42.297524    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:42.297534    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:42.324169    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:42.324182    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:57:42.339831    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:42.339928    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:42.364656    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:57:42.364664    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:57:42.381236    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:57:42.381251    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:57:42.394602    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:57:42.394616    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
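The "container status" gather above is runtime-agnostic: it tries crictl first and falls back to docker, and the command substitution which crictl || echo crictl keeps the line well-formed even when crictl is absent (the bare name then fails and triggers the docker branch). Stand-alone form:

    # Prefer crictl if installed, otherwise fall back to plain docker ps.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a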
	I1008 10:57:42.407070    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:42.407085    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:57:42.407110    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:57:42.407114    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:42.407121    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:42.407124    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:42.407127    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:57:38.620164    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:38.620717    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:38.644189    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:57:38.644311    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:38.660240    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:57:38.660336    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:38.673567    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:57:38.673668    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:38.684578    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:57:38.684660    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:38.694926    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:57:38.695001    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:38.705482    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:57:38.705547    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:38.715946    8523 logs.go:282] 0 containers: []
	W1008 10:57:38.715961    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:38.716039    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:38.726318    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:57:38.726335    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:57:38.726345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:57:38.740246    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:57:38.740257    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:57:38.756050    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:57:38.756060    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:57:38.776799    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:57:38.776811    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:57:38.793384    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:57:38.793393    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:57:38.810695    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:38.810705    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:38.918825    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:57:38.918837    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:57:38.932069    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:57:38.932080    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:57:38.943487    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:38.943503    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:38.970006    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:57:38.970017    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:38.981869    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:38.981879    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:57:39.011351    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:39.011359    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:39.015279    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:57:39.015285    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:57:39.032997    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:57:39.033008    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:57:39.051255    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:57:39.051266    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:57:39.069313    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:57:39.069324    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:57:41.582691    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:46.585474    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:46.585705    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:46.598689    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:57:46.598784    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:46.609209    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:57:46.609288    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:46.619339    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:57:46.619415    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:46.632928    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:57:46.633007    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:46.643373    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:57:46.643453    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:46.653511    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:57:46.653599    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:46.663360    8523 logs.go:282] 0 containers: []
	W1008 10:57:46.663371    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:46.663431    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:46.674115    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:57:46.674143    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:46.674153    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:57:46.703571    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:57:46.703582    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:57:46.722612    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:57:46.722625    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:57:46.738325    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:57:46.738336    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:57:46.763886    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:57:46.763901    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:57:46.774988    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:57:46.775000    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:57:46.788981    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:57:46.788995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:57:46.800029    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:57:46.800039    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:57:46.814459    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:57:46.814469    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:57:46.832890    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:57:46.832901    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:57:46.852580    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:46.852590    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:46.856692    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:46.856698    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:46.894458    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:57:46.894470    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:57:46.908855    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:57:46.908867    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:46.920937    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:57:46.920948    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:57:46.940538    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:46.940552    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:52.411340    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:49.466879    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:57.413441    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:57.413631    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:57.437365    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:57:57.437483    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:57.453750    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:57:57.453841    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:57.466560    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:57:57.466640    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:57.477764    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:57:57.477849    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:57.488201    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:57:57.488281    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:57.498855    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:57:57.498954    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:57.512326    8534 logs.go:282] 0 containers: []
	W1008 10:57:57.512341    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:57.512410    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:57.523448    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:57:57.523464    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:57.523470    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:57:57.538148    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:57.538253    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:57.563891    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:57:57.563901    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:57:57.576727    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:57:57.576737    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:57:57.589164    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:57:57.589174    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:57:57.602367    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:57:57.602382    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:57:57.616017    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:57:57.616031    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:57:57.631298    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:57:57.631311    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:57:57.644533    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:57:57.644542    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:57:57.661657    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:57:57.661668    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:57:57.673057    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:57:57.673067    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:57.685286    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:57.685300    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:57.723293    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:57:57.723305    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:57:57.737365    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:57:57.737377    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:57:57.753416    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:57:57.753427    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:57:57.765087    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:57.765098    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:57.790070    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:57.790077    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:57.794850    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:57:57.794862    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:57:57.806238    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:57.806247    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:57:57.806272    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:57:57.806276    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:57:57.806279    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:57:57.806282    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:57:57.806285    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:57:54.469449    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:54.469661    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:54.485085    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:57:54.485177    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:54.496866    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:57:54.496946    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:54.507658    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:57:54.507744    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:54.518756    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:57:54.518839    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:54.529256    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:57:54.529336    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:54.539890    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:57:54.539964    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:54.550119    8523 logs.go:282] 0 containers: []
	W1008 10:57:54.550130    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:54.550198    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:54.560593    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:57:54.560617    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:57:54.560621    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:57:54.572075    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:54.572086    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:54.598106    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:57:54.598118    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:57:54.612391    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:57:54.612406    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:57:54.626867    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:57:54.626882    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:57:54.639125    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:57:54.639136    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:57:54.656417    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:57:54.656427    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:57:54.677909    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:54.677920    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:57:54.707290    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:54.707300    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:54.711466    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:57:54.711473    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:57:54.724745    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:57:54.724761    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:57:54.735829    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:57:54.735841    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:54.747412    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:54.747426    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:54.783642    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:57:54.783656    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:57:54.797423    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:57:54.797432    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:57:54.811455    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:57:54.811464    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:57:57.331630    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:02.334130    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:02.334407    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:02.360316    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:02.360457    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:02.377834    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:02.377948    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:02.391376    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:02.391462    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:02.402912    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:02.402990    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:02.415081    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:02.415157    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:02.425065    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:02.425132    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:02.436142    8523 logs.go:282] 0 containers: []
	W1008 10:58:02.436157    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:02.436225    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:02.446086    8523 logs.go:282] 1 containers: [5333aa2337bc]
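Before each gathering pass, minikube re-discovers the control-plane containers with the docker ps filters shown above. A compact sketch of that enumeration, using only flags and component names that appear verbatim in this log:

	# One ID per line for each k8s_<component> container; most components
	# show two IDs here because the upgrade left an exited container
	# beside its replacement.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet storage-provisioner; do
	  docker ps -a --filter=name=k8s_${c} --format='{{.ID}}'
	done

kindnet matches nothing on this cluster, hence the recurring warning: No container was found matching "kindnet".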
	I1008 10:58:02.446113    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:02.446120    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:02.460940    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:02.460951    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:02.474870    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:02.474879    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:02.493839    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:02.493851    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:02.514930    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:02.514941    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:02.526921    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:02.526933    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:02.537884    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:02.537896    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:02.564215    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:02.564223    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:02.579415    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:02.579428    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:02.593871    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:02.593883    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:02.622717    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:02.622728    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:02.640674    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:02.640690    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:02.656018    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:02.656033    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:02.673936    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:02.673948    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:02.686200    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:02.686212    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:02.690532    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:02.690539    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
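The gathering steps themselves are plain shell commands run over SSH inside the guest, so the whole cycle can be replayed by hand. A sketch assembled only from commands that appear in this log (the container ID is a placeholder):

	# Last 400 lines from a single control-plane container:
	docker logs --tail 400 <container-id>
	# Host-side unit logs for the kubelet and the container runtime:
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	# Kernel messages at warning level and above:
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Cluster-level view via the pinned kubectl and the node's kubeconfig:
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	    --kubeconfig=/var/lib/minikube/kubeconfig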
	I1008 10:58:07.810405    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:05.237869    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:12.812694    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:12.812893    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:12.830817    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:12.830942    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:12.844531    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:12.844605    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:12.855769    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:12.855850    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:12.866721    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:12.866809    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:12.877620    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:12.877699    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:12.888164    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:12.888245    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:12.901565    8534 logs.go:282] 0 containers: []
	W1008 10:58:12.901576    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:12.901636    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:12.912247    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:12.912273    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:12.912279    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:12.926458    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:12.926471    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:12.944711    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:12.944722    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:12.956416    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:12.956428    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:12.968935    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:12.968946    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:12.982128    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:12.982140    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:12.999568    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:12.999580    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:13.017063    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:13.017074    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:13.028445    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:13.028456    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:13.039975    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:13.039988    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:13.051061    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:13.051074    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:13.073131    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:13.073140    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:13.077273    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:13.077282    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:13.113826    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:13.113837    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:13.127677    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:13.127687    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:13.139195    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:13.139208    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:13.164616    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:13.164626    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:13.177537    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:13.177639    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:13.202821    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:13.202828    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:13.202852    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:58:13.202856    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:13.202869    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:13.202873    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:13.202876    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
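The two flagged kubelet problems above are the one concrete finding in this cycle: the kubelet, running as system:node:running-upgrade-967000, is denied list/watch on the kube-root-ca.crt ConfigMap because the node authorizer reports no relationship between the node and the object. The node authorizer runs inside the apiserver, so while the apiserver is unhealthy this denial is more likely a symptom than the root cause. If the apiserver were reachable, the denial could be confirmed directly; a hypothetical check, not actually run in this report:

	# Ask whether the node's identity may list configmaps in kube-system,
	# impersonating the user and group from the kubelet log line.
	kubectl auth can-i list configmaps -n kube-system \
	    --as system:node:running-upgrade-967000 --as-group system:nodes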
	I1008 10:58:10.240509    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:10.240766    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:10.260413    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:10.260519    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:10.274929    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:10.275027    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:10.287079    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:10.287162    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:10.297724    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:10.297804    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:10.308882    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:10.308961    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:10.319517    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:10.319594    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:10.329862    8523 logs.go:282] 0 containers: []
	W1008 10:58:10.329875    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:10.329942    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:10.340695    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:10.340714    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:10.340720    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:10.377618    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:10.377631    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:10.390536    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:10.390546    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:10.405792    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:10.405804    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:10.423068    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:10.423078    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:10.427266    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:10.427273    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:10.441429    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:10.441442    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:10.452753    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:10.452764    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:10.464624    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:10.464638    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:10.476406    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:10.476416    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:10.497515    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:10.497528    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:10.510008    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:10.510022    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:10.528115    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:10.528125    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:10.557120    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:10.557130    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:10.570896    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:10.570908    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:10.590823    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:10.590834    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:10.602369    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:10.602380    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:13.128238    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:18.130037    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:18.130269    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:18.152549    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:18.152644    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:18.165682    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:18.165767    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:18.183019    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:18.183087    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:18.193742    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:18.193825    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:18.204308    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:18.204393    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:18.214448    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:18.214525    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:18.225107    8523 logs.go:282] 0 containers: []
	W1008 10:58:18.225118    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:18.225183    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:18.236243    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:18.236264    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:18.236270    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:23.207084    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:18.257654    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:18.257666    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:18.274532    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:18.274545    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:18.288623    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:18.288634    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:18.318185    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:18.318197    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:18.335622    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:18.335634    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:18.346691    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:18.346705    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:18.380895    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:18.380907    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:18.394101    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:18.394113    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:18.406651    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:18.406662    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:18.425514    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:18.425528    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:18.436700    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:18.436711    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:18.462673    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:18.462680    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:18.476682    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:18.476692    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:18.491315    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:18.491326    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:18.518823    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:18.518833    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:18.522915    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:18.522921    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:21.041365    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:28.209932    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:28.210259    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:26.043776    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:26.043960    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:26.056453    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:26.056542    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:26.067421    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:26.067496    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:26.077749    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:26.077827    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:26.090856    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:26.090935    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:26.101111    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:26.101185    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:26.112171    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:26.112255    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:26.122630    8523 logs.go:282] 0 containers: []
	W1008 10:58:26.122641    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:26.122705    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:26.132878    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:26.132896    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:26.132901    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:26.154392    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:26.154405    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:26.165538    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:26.165548    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:26.195181    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:26.195196    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:26.199767    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:26.199774    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:26.212268    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:26.212284    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:26.225160    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:26.225172    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:26.246653    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:26.246665    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:26.261136    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:26.261147    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:26.272780    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:26.272792    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:26.298293    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:26.298301    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:26.333097    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:26.333112    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:26.346448    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:26.346460    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:26.361382    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:26.361393    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:26.382539    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:26.382549    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:26.400831    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:26.400842    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:26.415556    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:26.415571    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:28.241324    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:28.241448    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:28.257929    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:28.258016    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:28.271534    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:28.271624    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:28.286091    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:28.286177    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:28.296890    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:28.296958    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:28.307450    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:28.307536    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:28.317426    8534 logs.go:282] 0 containers: []
	W1008 10:58:28.317437    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:28.317502    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:28.333047    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:28.333064    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:28.333069    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:28.345775    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:28.345790    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:28.357129    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:28.357145    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:28.375390    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:28.375405    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:28.387484    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:28.387495    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:28.402970    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:28.403067    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:28.427761    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:28.427769    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:28.441781    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:28.441796    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:28.456379    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:28.456389    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:28.468024    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:28.468039    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:28.479894    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:28.479907    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:28.492057    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:28.492067    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:28.533915    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:28.533930    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:28.546558    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:28.546571    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:28.558187    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:28.558197    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:28.569218    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:28.569234    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:28.581903    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:28.581917    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:28.586888    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:28.586895    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:28.613774    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:28.613784    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:28.613812    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:58:28.613816    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:28.613819    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:28.613822    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:28.613825    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:58:28.935597    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:33.937895    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
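By this point the pattern is strictly periodic: each minikube process (pids 8523 and 8534) probes healthz, waits out the five-second timeout, re-runs the full gathering cycle, and tries again roughly every eight seconds. A rough sketch of that outer loop; the 5s probe timeout matches the log, while the retry interval and overall deadline are illustrative assumptions:

	# Poll until healthy or until an assumed overall deadline expires.
	deadline=$((SECONDS + 240))
	until curl -sk --max-time 5 https://10.0.2.15:8443/healthz >/dev/null; do
	    if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "apiserver never became healthy" >&2
	        break
	    fi
	    sleep 3
	done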
	I1008 10:58:33.938507    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:33.980291    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:33.980441    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:34.000290    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:34.000401    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:34.014447    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:34.014537    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:34.026344    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:34.026422    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:34.043421    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:34.043529    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:34.054024    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:34.054101    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:34.064446    8523 logs.go:282] 0 containers: []
	W1008 10:58:34.064457    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:34.064522    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:34.074930    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:34.074946    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:34.074953    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:34.079420    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:34.079428    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:34.096495    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:34.096509    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:34.120545    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:34.120564    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:34.135622    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:34.135637    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:34.153288    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:34.153302    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:34.174311    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:34.174326    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:34.186532    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:34.186543    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:34.199305    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:34.199315    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:34.213287    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:34.213298    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:34.225280    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:34.225292    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:34.236487    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:34.236498    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:34.266046    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:34.266055    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:34.302780    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:34.302793    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:34.317316    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:34.317331    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:34.328971    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:34.328985    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:34.350338    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:34.350348    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:36.864200    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:38.618050    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:41.866977    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:41.867275    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:41.894791    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:41.894930    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:41.912540    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:41.912642    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:41.925650    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:41.925732    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:41.939133    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:41.939214    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:41.949576    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:41.949657    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:41.961107    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:41.961189    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:41.971263    8523 logs.go:282] 0 containers: []
	W1008 10:58:41.971275    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:41.971341    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:41.981783    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:41.981801    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:41.981807    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:41.992785    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:41.992799    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:42.007707    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:42.007720    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:42.034166    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:42.034177    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:42.045880    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:42.045894    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:42.075021    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:42.075028    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:42.096494    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:42.096506    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:42.113881    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:42.113893    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:42.131149    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:42.131165    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:42.135182    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:42.135190    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:42.147703    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:42.147713    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:42.161752    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:42.161767    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:42.174068    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:42.174084    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:42.207698    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:42.207708    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:42.221979    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:42.221995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:42.236869    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:42.236880    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:42.249677    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:42.249694    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:43.620816    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:43.620999    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:43.637851    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:43.637949    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:43.650874    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:43.650950    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:43.661519    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:43.661601    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:43.672005    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:43.672082    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:43.682803    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:43.682881    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:43.693954    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:43.694031    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:43.704541    8534 logs.go:282] 0 containers: []
	W1008 10:58:43.704553    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:43.704627    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:43.717467    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:43.717484    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:43.717490    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:43.731268    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:43.731371    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:43.756450    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:43.756461    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:43.767797    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:43.767808    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:43.791651    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:43.791659    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:43.805823    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:43.805833    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:43.817622    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:43.817634    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:43.829341    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:43.829353    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:43.840778    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:43.840789    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:43.852500    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:43.852511    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:43.865586    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:43.865597    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:43.880597    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:43.880613    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:43.892631    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:43.892645    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:43.896912    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:43.896919    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:43.931240    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:43.931253    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:43.945800    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:43.945809    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:43.958133    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:43.958144    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:43.975532    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:43.975543    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:43.986661    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:43.986673    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:43.986702    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:58:43.986709    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:43.986714    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:43.986729    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:43.986732    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:58:44.763711    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:49.764521    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:49.764764    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:49.790207    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:49.790359    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:49.806575    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:49.806673    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:49.820354    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:49.820440    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:49.832511    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:49.832589    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:49.842997    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:49.843070    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:49.853611    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:49.853685    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:49.864111    8523 logs.go:282] 0 containers: []
	W1008 10:58:49.864127    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:49.864192    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:49.875186    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:49.875204    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:49.875210    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:49.889554    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:49.889566    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:49.903935    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:49.903947    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:49.915516    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:49.915527    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:49.920565    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:49.920572    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:49.957551    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:49.957562    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:49.972167    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:49.972178    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:49.983810    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:49.983820    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:50.009811    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:50.009829    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:50.041107    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:50.041130    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:50.054054    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:50.054069    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:50.070723    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:50.070738    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:50.082655    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:50.082666    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:50.095525    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:50.095540    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:50.117232    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:50.117243    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:50.131382    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:50.131397    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:50.149218    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:50.149228    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:52.668904    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:53.989565    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:57.670595    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:57.670753    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:57.687157    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:57.687249    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:57.700294    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:57.700382    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:57.711315    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:57.711386    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:57.722194    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:57.722278    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:57.733011    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:57.733093    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:57.744144    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:57.744220    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:57.754562    8523 logs.go:282] 0 containers: []
	W1008 10:58:57.754574    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:57.754646    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:57.767164    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:57.767183    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:57.767188    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:57.781198    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:57.781210    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:57.802324    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:57.802334    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:57.816883    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:57.816894    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:57.832211    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:57.832221    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:57.849545    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:57.849555    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:57.860541    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:57.860553    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:57.890078    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:57.890088    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:57.926842    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:57.926853    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:57.943972    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:57.943984    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:57.958949    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:57.958959    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:57.974124    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:57.974138    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:57.985534    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:57.985545    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:58.010993    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:58.011002    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:58.014922    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:58.014932    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:58.025947    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:58.025959    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:58.038330    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:58.038341    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
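Each "Checking apiserver healthz" line above is followed almost exactly five seconds later by a "stopped: ... (Client.Timeout exceeded while awaiting headers)" line, which is net/http's client-timeout error. A minimal stand-alone sketch of that probe follows; assumptions: the 5s timeout is inferred from the log spacing, certificate verification is skipped because the probe hits the apiserver by IP, and this is not minikube's actual api_server.go code.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // matches the ~5s gap between "Checking" and "stopped"
    		Transport: &http.Transport{
    			// The probe dials the apiserver by IP, so skip cert verification here.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		// On timeout this is "context deadline exceeded (Client.Timeout
    		// exceeded while awaiting headers)", the error quoted in the log.
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("healthz returned %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := checkHealthz("https://10.0.2.15:8443/healthz"); err != nil {
    		fmt.Println("stopped:", err)
    		return
    	}
    	fmt.Println("apiserver healthy")
    }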
	I1008 10:58:58.992030    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:58.992298    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:59.018828    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:58:59.018966    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:59.035753    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:58:59.035855    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:59.048871    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:58:59.048959    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:59.060032    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:58:59.060116    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:59.070582    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:58:59.070667    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:59.080907    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:58:59.080990    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:59.091101    8534 logs.go:282] 0 containers: []
	W1008 10:58:59.091111    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:59.091172    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:59.101435    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:58:59.101452    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:58:59.101459    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:58:59.121004    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:58:59.121017    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:59.134568    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:59.134579    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:58:59.150336    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:59.150435    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:59.175276    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:59.175283    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:59.212167    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:58:59.212177    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:58:59.225569    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:58:59.225581    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:58:59.236607    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:58:59.236620    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:58:59.248463    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:58:59.248478    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:58:59.260126    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:59.260138    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:59.264708    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:58:59.264716    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:58:59.276931    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:58:59.276941    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:58:59.288739    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:58:59.288751    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:58:59.300390    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:59.300399    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:59.325800    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:58:59.325809    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:58:59.341319    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:58:59.341330    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:58:59.355416    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:58:59.355427    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:58:59.376421    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:58:59.376437    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:58:59.387924    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:59.387936    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:58:59.387964    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:58:59.387969    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:58:59.387977    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:58:59.388052    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:58:59.388087    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
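Every gathering cycle above starts the same way: one docker ps query per expected control-plane component to discover container IDs, then docker logs --tail 400 for each hit. A sketch of that pattern with plain os/exec follows; assumption: this is a local stand-in for minikube's ssh_runner, which runs the identical commands inside the guest over SSH.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs lists container IDs whose names match a k8s_<component> filter,
    // mirroring the "docker ps -a --filter=name=k8s_..." lines in the log.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+component, "--format={{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			// Same tail length as the gathering commands above.
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("--- %s [%s] ---\n%s", c, id, logs)
    		}
    	}
    }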
	I1008 10:59:00.558097    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:05.560783    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:05.561107    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:05.589389    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:05.589531    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:05.605186    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:05.605279    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:05.618104    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:05.618176    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:05.628971    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:05.629050    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:05.639382    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:05.639464    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:05.649961    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:05.650035    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:05.663061    8523 logs.go:282] 0 containers: []
	W1008 10:59:05.663074    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:05.663149    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:05.673467    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:05.673484    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:05.673489    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:05.687442    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:05.687453    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:05.704569    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:05.704580    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:05.732930    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:05.732943    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:05.744104    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:05.744116    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:05.765025    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:05.765035    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:05.780130    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:05.780141    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:05.794715    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:05.794726    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:05.805981    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:05.805997    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:05.823041    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:05.823050    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:05.834934    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:05.834945    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:05.858630    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:05.858637    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:05.870638    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:05.870649    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:05.885045    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:05.885057    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:05.896815    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:05.896825    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:05.901581    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:05.901588    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:05.937473    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:05.937485    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:09.392242    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:08.452611    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:14.392660    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:14.392855    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:14.407308    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:59:14.407405    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:14.418734    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:59:14.418818    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:14.429305    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:59:14.429384    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:14.440155    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:59:14.440230    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:14.450519    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:59:14.450600    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:14.461284    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:59:14.461363    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:14.472329    8534 logs.go:282] 0 containers: []
	W1008 10:59:14.472342    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:14.472415    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:14.482866    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:59:14.482882    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:59:14.482888    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:59:14.500433    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:59:14.500442    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:59:14.512378    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:14.512390    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:14.516683    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:59:14.516692    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:59:14.529721    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:59:14.529732    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:59:14.540633    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:59:14.540644    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:59:14.552456    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:14.552466    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:59:14.565741    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:14.565839    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:14.590337    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:59:14.590345    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:59:14.602340    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:59:14.602350    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:59:14.613391    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:59:14.613402    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:59:14.624758    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:59:14.624770    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:59:14.636964    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:14.636975    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:14.661908    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:59:14.661917    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:14.673905    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:14.673916    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:14.712516    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:59:14.712528    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:59:14.727386    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:59:14.727397    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:59:14.741711    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:59:14.741723    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:59:14.753727    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:14.753741    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:59:14.753767    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:59:14.753771    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:14.753775    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:14.753779    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:14.753782    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:59:13.455302    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:13.455577    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:13.480995    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:13.481106    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:13.495408    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:13.495498    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:13.508148    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:13.508234    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:13.518962    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:13.519046    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:13.529499    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:13.529581    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:13.539702    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:13.539780    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:13.550156    8523 logs.go:282] 0 containers: []
	W1008 10:59:13.550169    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:13.550240    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:13.560673    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:13.560692    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:13.560700    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:13.599396    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:13.599408    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:13.613342    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:13.613355    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:13.625354    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:13.625365    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:13.648565    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:13.648578    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:13.662208    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:13.662220    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:13.677170    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:13.677184    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:13.691999    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:13.692010    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:13.703388    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:13.703400    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:13.731109    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:13.731117    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:13.742392    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:13.742403    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:13.760173    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:13.760183    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:13.784528    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:13.784538    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:13.797934    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:13.797949    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:13.802058    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:13.802065    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:13.819135    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:13.819144    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:13.838020    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:13.838032    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
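The recurring "describe nodes" step shells out to the guest's bundled kubectl. Its core query, node objects and their conditions, can also be asked directly with client-go; a sketch under the same kubeconfig assumption as the access-review sketch above:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	// Print each node with its condition summary, the part of
    	// "kubectl describe nodes" these cycles are after.
    	for _, n := range nodes.Items {
    		fmt.Println(n.Name)
    		for _, c := range n.Status.Conditions {
    			fmt.Printf("  %s=%s (%s)\n", c.Type, c.Status, c.Reason)
    		}
    	}
    }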
	I1008 10:59:16.353320    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:21.355581    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:21.355720    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:21.368053    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:21.368144    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:21.382824    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:21.382916    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:21.394000    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:21.394083    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:21.404550    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:21.404634    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:21.417715    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:21.417793    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:21.428504    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:21.428589    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:21.442852    8523 logs.go:282] 0 containers: []
	W1008 10:59:21.442863    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:21.442937    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:21.453263    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:21.453280    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:21.453285    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:21.467102    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:21.467112    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:21.484805    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:21.484817    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:21.496583    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:21.496594    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:21.509421    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:21.509432    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:21.545620    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:21.545636    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:21.560609    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:21.560624    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:21.577939    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:21.577950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:21.589336    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:21.589348    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:21.604234    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:21.604243    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:21.615385    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:21.615396    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:21.629177    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:21.629192    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:21.648248    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:21.648258    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:21.669391    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:21.669399    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:21.693900    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:21.693916    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:21.719233    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:21.719240    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:21.748344    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:21.748354    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:24.757958    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:24.254638    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:29.760558    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:29.760661    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:29.772158    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:59:29.772239    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:29.782593    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:59:29.782662    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:29.792806    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:59:29.792888    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:29.803709    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:59:29.803781    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:29.814898    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:59:29.814984    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:29.825339    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:59:29.825417    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:29.835449    8534 logs.go:282] 0 containers: []
	W1008 10:59:29.835468    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:29.835537    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:29.846386    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:59:29.846405    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:59:29.846410    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:59:29.858448    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:59:29.858459    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:59:29.870242    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:59:29.870253    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:59:29.884495    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:59:29.884507    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:59:29.897056    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:59:29.897071    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:59:29.911404    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:29.911420    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:29.935318    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:59:29.935327    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:29.947853    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:29.947863    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:59:29.961911    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:29.962011    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:29.986813    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:59:29.986821    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:59:30.000285    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:59:30.000299    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:59:30.012188    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:59:30.012202    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:59:30.023183    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:59:30.023194    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:59:30.034362    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:59:30.034375    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:59:30.046699    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:30.046711    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:30.051520    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:30.051528    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:30.086295    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:59:30.086309    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:59:30.102483    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:59:30.102497    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:59:30.120048    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:30.120061    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:59:30.120087    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:59:30.120092    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:30.120095    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:30.120099    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:30.120102    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
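The "Found kubelet problem" warnings show that the kubelet journal is not just dumped but scanned line by line for known failure patterns, and the matches are replayed in the "X Problems detected in kubelet" summary that closes each cycle. A rough equivalent follows; assumption: the regexp here is an illustrative stand-in, minikube's real matcher in logs.go keeps its own pattern list.

    package main

    import (
    	"bufio"
    	"bytes"
    	"fmt"
    	"os/exec"
    	"regexp"
    )

    func main() {
    	// Same journal query as the "Gathering logs for kubelet" step above.
    	out, err := exec.Command("journalctl", "-u", "kubelet", "-n", "400").Output()
    	if err != nil {
    		fmt.Println("journalctl:", err)
    		return
    	}
    	// Illustrative problem markers only; not minikube's actual list.
    	problem := regexp.MustCompile(`(?i)\b(failed|forbidden|error)\b`)
    	sc := bufio.NewScanner(bytes.NewReader(out))
    	for sc.Scan() {
    		if line := sc.Text(); problem.MatchString(line) {
    			fmt.Println("Found kubelet problem:", line)
    		}
    	}
    }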
	I1008 10:59:29.257163    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:29.257696    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:29.292271    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:29.292420    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:29.312147    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:29.312261    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:29.327067    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:29.327155    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:29.341329    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:29.341411    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:29.354953    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:29.355032    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:29.365693    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:29.365762    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:29.376634    8523 logs.go:282] 0 containers: []
	W1008 10:59:29.376645    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:29.376705    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:29.388106    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:29.388126    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:29.388131    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:29.411230    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:29.411242    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:29.427611    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:29.427622    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:29.445007    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:29.445017    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:29.463463    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:29.463479    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:29.475594    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:29.475604    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:29.491595    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:29.491606    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:29.495895    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:29.495903    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:29.531259    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:29.531276    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:29.559669    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:29.559677    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:29.578810    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:29.578821    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:29.604305    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:29.604312    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:29.616834    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:29.616845    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:29.635290    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:29.635301    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:29.646839    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:29.646850    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:29.658297    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:29.658306    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:29.669819    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:29.669831    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:32.185817    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:37.188267    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:37.188721    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:37.223389    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:37.223545    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:37.243769    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:37.243870    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:37.258561    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:37.258649    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:37.270582    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:37.270698    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:37.281179    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:37.281257    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:37.292054    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:37.292132    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:37.302189    8523 logs.go:282] 0 containers: []
	W1008 10:59:37.302203    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:37.302271    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:37.312857    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:37.312875    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:37.312880    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:37.327606    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:37.327618    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:37.345549    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:37.345560    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:37.370345    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:37.370353    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:37.388096    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:37.388107    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:37.404518    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:37.404530    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:37.428940    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:37.428950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:37.446224    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:37.446237    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:37.458713    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:37.458726    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:37.475604    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:37.475621    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:37.487669    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:37.487685    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:37.516759    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:37.516770    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:37.521492    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:37.521498    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:37.555279    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:37.555291    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:37.575444    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:37.575457    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:37.595429    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:37.595440    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:37.607041    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:37.607057    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
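With the IDs in hand, each "Gathering logs for <component> [<id>]" step above is one docker logs --tail 400 <id> run over SSH inside the guest, supplemented by journalctl for the kubelet and docker units, dmesg, kubectl describe nodes, and a crictl/docker ps fallback for container status. A sketch of the per-container step (the helper name is assumed):

```go
package main

import (
	"fmt"
	"os/exec"
)

// tailContainerLogs mirrors the repeated
//   /bin/bash -c "docker logs --tail 400 <id>"
// invocations above: capture the last 400 lines of one container's
// output. The container's stderr stream comes back on docker's own
// stderr, so CombinedOutput merges both streams into one transcript.
func tailContainerLogs(id string, lines int) (string, error) {
	out, err := exec.Command("docker", "logs",
		"--tail", fmt.Sprint(lines), id).CombinedOutput()
	return string(out), err
}

func main() {
	// Container ID taken from the kube-apiserver enumeration above.
	logs, err := tailContainerLogs("4940e0f91298", 400)
	if err != nil {
		fmt.Println("gather failed:", err)
		return
	}
	fmt.Print(logs)
}
```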
	I1008 10:59:40.124213    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:40.121259    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:45.127469    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:45.127577    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:45.140149    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 10:59:45.140229    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:45.152163    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 10:59:45.152240    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:45.163469    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 10:59:45.163550    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:45.174386    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 10:59:45.174467    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:45.186064    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 10:59:45.186136    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:45.198560    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 10:59:45.198635    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:45.210150    8534 logs.go:282] 0 containers: []
	W1008 10:59:45.210163    8534 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:45.210230    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:45.221517    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 10:59:45.221537    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:45.221542    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 10:59:45.236933    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:45.237034    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
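The two warnings above show the gatherer doing more than collection: kubelet journal lines are matched against known failure signatures and flagged as problems. Assuming a simple pattern scan (the real logic lives in logs.go and is not reproduced here; the flagged RBAC error serves as the example signature), detection could look like:

```go
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// problemPatterns is an illustrative subset of failure signatures;
// the "is forbidden" RBAC error matches the kubelet lines flagged above.
var problemPatterns = []*regexp.Regexp{
	regexp.MustCompile(`is forbidden`),
	regexp.MustCompile(`Failed to watch`),
}

// findProblems scans journal output line by line and returns every
// line matching a known signature, the way the gatherer reports
// "Found kubelet problem: ..." above.
func findProblems(journal string) []string {
	var found []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, re := range problemPatterns {
			if re.MatchString(line) {
				found = append(found, line)
				break
			}
		}
	}
	return found
}

func main() {
	sample := `Oct 08 17:56:28 kubelet[1952]: W1008 ... configmaps "kube-root-ca.crt" is forbidden
Oct 08 17:56:29 kubelet[1952]: I1008 ... ordinary line`
	for _, p := range findProblems(sample) {
		fmt.Println("Found kubelet problem:", p)
	}
}
```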
	I1008 10:59:45.262878    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:45.262905    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:45.267858    8534 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:45.267867    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:45.292500    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 10:59:45.292517    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 10:59:45.306865    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 10:59:45.306878    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 10:59:45.320664    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 10:59:45.320677    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 10:59:45.336341    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 10:59:45.336353    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 10:59:45.349095    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 10:59:45.349107    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 10:59:45.361126    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 10:59:45.361137    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 10:59:45.375466    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 10:59:45.375478    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 10:59:45.393782    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 10:59:45.393791    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 10:59:45.409935    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 10:59:45.409950    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 10:59:45.422544    8534 logs.go:123] Gathering logs for container status ...
	I1008 10:59:45.422556    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:45.435753    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:45.435768    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:45.481079    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 10:59:45.481093    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 10:59:45.493973    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 10:59:45.493983    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 10:59:45.511742    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 10:59:45.511754    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 10:59:45.526789    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:45.526799    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 10:59:45.526827    8534 out.go:270] X Problems detected in kubelet:
	W1008 10:59:45.526832    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 10:59:45.526887    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 10:59:45.526895    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 10:59:45.526900    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
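This summary block then recurs at the end of every gather cycle until the overall apiserver wait gives up. The flagged error itself is an authorization failure: under the Node authorizer a kubelet may only read ConfigMaps referenced by pods bound to its own node, and here the API server reports no relationship between node 'running-upgrade-967000' and the kube-root-ca.crt object, so the list/watch is forbidden. The reporting side is plain stderr output, roughly (a sketch, not out.go's actual code):

```go
package main

import (
	"fmt"
	"os"
)

// reportProblems mirrors the summary block above: once per gather
// cycle, any flagged kubelet lines are echoed to stderr (fd 2) under
// an "X Problems detected in kubelet:" banner.
func reportProblems(problems []string) {
	if len(problems) == 0 {
		return
	}
	fmt.Fprintln(os.Stderr, "X Problems detected in kubelet:")
	for _, p := range problems {
		fmt.Fprintln(os.Stderr, "  "+p)
	}
}

func main() {
	reportProblems([]string{
		`Oct 08 17:56:28 ... configmaps "kube-root-ca.crt" is forbidden ...`,
	})
}
```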
	I1008 10:59:45.123696    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:45.123905    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:45.137230    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:45.137325    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:45.148704    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:45.148789    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:45.160161    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:45.160238    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:45.172151    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:45.172233    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:45.183975    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:45.184057    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:45.195561    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:45.195647    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:45.206625    8523 logs.go:282] 0 containers: []
	W1008 10:59:45.206638    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:45.206712    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:45.218487    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:45.218507    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:45.218513    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:45.232101    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:45.232112    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:45.249639    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:45.249650    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:45.286805    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:45.286819    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:45.301866    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:45.301881    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:45.317625    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:45.317644    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:45.340326    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:45.340337    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:45.361622    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:45.361631    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:45.392558    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:45.392571    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:45.397706    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:45.397717    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:45.423607    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:45.423617    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:45.437715    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:45.437729    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:45.454163    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:45.454177    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:45.474099    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:45.474111    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:45.493045    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:45.493059    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:45.506568    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:45.506581    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:45.526337    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:45.526350    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:48.040046    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:53.042357    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:53.042550    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:53.057688    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:53.057791    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:53.068732    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:53.068812    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:53.079414    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:53.079497    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:53.089791    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:53.089883    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:53.099929    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:53.100012    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:53.110239    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:53.110315    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:53.120731    8523 logs.go:282] 0 containers: []
	W1008 10:59:53.120748    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:53.120813    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:53.130893    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:53.130910    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:53.130915    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:53.158386    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:53.158394    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:53.172929    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:53.172941    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:53.198529    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:53.198542    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:53.210569    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:53.210581    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:53.227731    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:53.227745    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:53.239122    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:53.239135    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:55.531013    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:53.252598    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:53.252610    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:53.266161    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:53.266172    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:53.277275    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:53.277287    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:53.295102    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:53.295112    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:53.330869    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:53.330881    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:53.346481    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:53.346492    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:53.363287    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:53.363297    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:53.375313    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:53.375325    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:53.398826    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:53.398834    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:53.410432    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:53.410441    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:55.916602    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:00.533535    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:00.534029    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:00.566688    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 11:00:00.566836    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:00.586685    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 11:00:00.586794    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:00.604877    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 11:00:00.604953    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:00.616275    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 11:00:00.616345    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:00.627119    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 11:00:00.627203    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:00.637750    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 11:00:00.637820    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:00.647588    8534 logs.go:282] 0 containers: []
	W1008 11:00:00.647599    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:00.647658    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:00.660000    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 11:00:00.660021    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:00.660028    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:00.664428    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:00.664436    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:00.699849    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 11:00:00.699859    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 11:00:00.712152    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 11:00:00.712166    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 11:00:00.731057    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 11:00:00.731070    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 11:00:00.743407    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:00.743420    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 11:00:00.759765    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:00.759864    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:00.785015    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 11:00:00.785025    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 11:00:00.800169    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 11:00:00.800182    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 11:00:00.816887    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 11:00:00.816899    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 11:00:00.828900    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:00:00.828912    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:00.841577    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 11:00:00.841590    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 11:00:00.854220    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 11:00:00.854232    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 11:00:00.867750    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 11:00:00.867762    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 11:00:00.878584    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 11:00:00.878598    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 11:00:00.891309    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 11:00:00.891320    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 11:00:00.902862    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 11:00:00.902875    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 11:00:00.914369    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:00.914381    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:00.939423    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:00.939436    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 11:00:00.939471    8534 out.go:270] X Problems detected in kubelet:
	W1008 11:00:00.939485    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:00.939498    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:00.939512    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:00.939516    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:00:00.918838    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:00.918942    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:00.932415    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:00.932508    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:00.943773    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:00.943877    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:00.955345    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:00.955436    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:00.966084    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:00.966170    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:00.976850    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:00.976939    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:00.987746    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:00.987840    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:00.998134    8523 logs.go:282] 0 containers: []
	W1008 11:00:00.998150    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:00.998228    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:01.011926    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:01.011944    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:01.011950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:01.023560    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:01.023572    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:01.058107    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:01.058120    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:01.072159    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:01.072172    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:01.086570    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:01.086582    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:01.107678    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:01.107689    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:01.120507    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:01.120521    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:01.146130    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:01.146141    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:01.163475    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:01.163487    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:01.175862    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:01.175875    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:01.190056    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:01.190068    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:01.194355    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:01.194362    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:01.208181    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:01.208192    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:01.219676    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:01.219688    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:01.244777    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:01.244788    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:01.258773    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:01.258785    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:01.288550    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:01.288560    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:03.800406    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:10.943716    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:08.802897    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:08.803458    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:08.840340    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:08.840512    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:08.860780    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:08.860884    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:08.875365    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:08.875467    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:08.887530    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:08.887607    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:08.898229    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:08.898304    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:08.910288    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:08.910366    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:08.920442    8523 logs.go:282] 0 containers: []
	W1008 11:00:08.920454    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:08.920525    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:08.932206    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:08.932225    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:08.932233    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:08.978721    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:08.978733    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:08.993830    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:08.993841    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:09.015048    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:09.015059    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:09.027052    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:09.027064    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:09.051215    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:09.051223    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:09.055364    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:09.055372    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:09.068340    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:09.068351    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:09.089513    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:09.089523    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:09.106733    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:09.106743    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:09.118734    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:09.118745    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:09.147870    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:09.147881    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:09.162116    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:09.162127    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:09.176534    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:09.176547    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:09.192664    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:09.192678    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:09.204495    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:09.204509    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:09.221860    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:09.221873    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:11.734931    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:15.946180    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:15.946457    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:15.963337    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 11:00:15.963435    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:15.976690    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 11:00:15.976762    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:15.989372    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 11:00:15.989456    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:16.000239    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 11:00:16.000328    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:16.011005    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 11:00:16.011082    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:16.021796    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 11:00:16.021873    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:16.032788    8534 logs.go:282] 0 containers: []
	W1008 11:00:16.032801    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:16.032876    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:16.043358    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 11:00:16.043381    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 11:00:16.043387    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 11:00:16.056960    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 11:00:16.056973    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 11:00:16.074623    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 11:00:16.074634    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 11:00:16.086852    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:16.086863    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:16.110155    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 11:00:16.110163    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 11:00:16.124012    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 11:00:16.124022    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 11:00:16.137173    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 11:00:16.137188    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 11:00:16.148440    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 11:00:16.148455    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 11:00:16.159486    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 11:00:16.159496    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 11:00:16.171980    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:16.171992    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 11:00:16.187206    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:16.187308    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:16.212108    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:16.212114    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:16.216373    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:16.216380    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:16.253653    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 11:00:16.253667    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 11:00:16.275023    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 11:00:16.275036    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 11:00:16.292747    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 11:00:16.292762    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 11:00:16.306948    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 11:00:16.306958    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 11:00:16.319988    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:00:16.319999    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:16.332443    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:16.332457    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 11:00:16.332484    8534 out.go:270] X Problems detected in kubelet:
	W1008 11:00:16.332491    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:16.332494    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:16.332498    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:16.332501    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:00:16.737268    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:16.737608    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:16.763124    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:16.763254    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:16.778679    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:16.778768    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:16.805175    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:16.805271    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:16.817971    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:16.818055    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:16.829581    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:16.829667    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:16.843241    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:16.843327    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:16.853413    8523 logs.go:282] 0 containers: []
	W1008 11:00:16.853433    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:16.853496    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:16.863624    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:16.863643    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:16.863649    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:16.877385    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:16.877396    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:16.888250    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:16.888262    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:16.901013    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:16.901024    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:16.937533    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:16.937546    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:16.949584    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:16.949595    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:16.965018    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:16.965030    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:16.979501    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:16.979512    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:17.004104    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:17.004112    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:17.015888    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:17.015899    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:17.045609    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:17.045618    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:17.059856    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:17.059866    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:17.072921    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:17.072931    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:17.093559    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:17.093569    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:17.110529    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:17.110539    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:17.130091    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:17.130102    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:17.141859    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:17.141871    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:19.647904    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:26.333827    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:24.648728    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:24.649390    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:24.687018    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:24.687177    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:24.712397    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:24.712529    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:24.727766    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:24.727856    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:24.740364    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:24.740444    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:24.751673    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:24.751749    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:24.762753    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:24.762840    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:24.773375    8523 logs.go:282] 0 containers: []
	W1008 11:00:24.773388    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:24.773455    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:24.784400    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:24.784420    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:24.784425    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:24.795808    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:24.795819    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:24.817189    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:24.817200    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:24.832393    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:24.832405    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:24.849043    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:24.849055    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:24.863328    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:24.863338    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:24.876803    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:24.876819    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:24.889503    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:24.889514    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:24.918194    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:24.918203    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:24.956654    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:24.956666    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:24.968434    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:24.968443    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:24.980493    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:24.980505    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:24.984990    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:24.985000    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:25.000427    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:25.000438    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:25.025952    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:25.025963    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
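Besides per-container `docker logs --tail 400`, the gatherer pulls three host-level sources on every iteration: the kubelet unit, the docker and cri-docker units, and the kernel ring buffer. The exact commands from the log, bundled for reference:

    # Host-level diagnostics collected alongside the container logs.
    sudo journalctl -u kubelet -n 400                # kubelet unit log
    sudo journalctl -u docker -u cri-docker -n 400   # runtime + CRI shim
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400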
	I1008 11:00:25.048275    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:25.048283    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:25.062466    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:25.062480    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:27.582823    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:31.336146    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:31.336624    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:31.367606    8534 logs.go:282] 2 containers: [8b67cf751135 1f931536e1ad]
	I1008 11:00:31.367759    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:31.386885    8534 logs.go:282] 2 containers: [657e987b5c19 7ae29440d84a]
	I1008 11:00:31.386991    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:31.401333    8534 logs.go:282] 1 containers: [59194b01fdd4]
	I1008 11:00:31.401424    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:31.418241    8534 logs.go:282] 2 containers: [fd6a9159c349 dd87d47a322c]
	I1008 11:00:31.418327    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:31.436699    8534 logs.go:282] 1 containers: [8505aae35241]
	I1008 11:00:31.436774    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:31.451274    8534 logs.go:282] 2 containers: [dbbe609e90dc 03e4a316dfd2]
	I1008 11:00:31.451351    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:31.464569    8534 logs.go:282] 0 containers: []
	W1008 11:00:31.464581    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:31.464656    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:31.476344    8534 logs.go:282] 2 containers: [a3ca9e4bc759 44233732d9c0]
	I1008 11:00:31.476363    8534 logs.go:123] Gathering logs for kube-scheduler [dd87d47a322c] ...
	I1008 11:00:31.476369    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dd87d47a322c"
	I1008 11:00:31.489587    8534 logs.go:123] Gathering logs for storage-provisioner [44233732d9c0] ...
	I1008 11:00:31.489600    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 44233732d9c0"
	I1008 11:00:31.501279    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:31.501292    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:31.526696    8534 logs.go:123] Gathering logs for kube-controller-manager [03e4a316dfd2] ...
	I1008 11:00:31.526706    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 03e4a316dfd2"
	I1008 11:00:31.538455    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:00:31.538468    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:31.550921    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:31.550933    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1008 11:00:31.565033    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:31.565137    8534 logs.go:138] Found kubelet problem: Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
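The two kubelet problems flagged above are NodeRestriction denials: a node's kubelet may only read a ConfigMap once a pod bound to that node actually references it, and during the upgrade no such relationship exists yet for `kube-root-ca.crt`. A quick way to see which pods (and hence which object relationships) the node currently has — shown here as a hypothetical diagnostic, not something the test runs:

    # Pods bound to the node determine which ConfigMaps/Secrets its kubelet may read.
    kubectl get pods -A --field-selector spec.nodeName=running-upgrade-967000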
	I1008 11:00:31.590415    8534 logs.go:123] Gathering logs for etcd [657e987b5c19] ...
	I1008 11:00:31.590422    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 657e987b5c19"
	I1008 11:00:31.604857    8534 logs.go:123] Gathering logs for etcd [7ae29440d84a] ...
	I1008 11:00:31.604866    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ae29440d84a"
	I1008 11:00:31.618279    8534 logs.go:123] Gathering logs for kube-scheduler [fd6a9159c349] ...
	I1008 11:00:31.618293    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd6a9159c349"
	I1008 11:00:31.630444    8534 logs.go:123] Gathering logs for kube-controller-manager [dbbe609e90dc] ...
	I1008 11:00:31.630454    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 dbbe609e90dc"
	I1008 11:00:31.648513    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:31.648523    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:31.652791    8534 logs.go:123] Gathering logs for kube-apiserver [1f931536e1ad] ...
	I1008 11:00:31.652796    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1f931536e1ad"
	I1008 11:00:31.665781    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:31.665790    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:31.700408    8534 logs.go:123] Gathering logs for kube-apiserver [8b67cf751135] ...
	I1008 11:00:31.700419    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8b67cf751135"
	I1008 11:00:31.714035    8534 logs.go:123] Gathering logs for coredns [59194b01fdd4] ...
	I1008 11:00:31.714046    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59194b01fdd4"
	I1008 11:00:31.725411    8534 logs.go:123] Gathering logs for kube-proxy [8505aae35241] ...
	I1008 11:00:31.725426    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8505aae35241"
	I1008 11:00:31.737169    8534 logs.go:123] Gathering logs for storage-provisioner [a3ca9e4bc759] ...
	I1008 11:00:31.737182    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a3ca9e4bc759"
	I1008 11:00:31.748457    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:31.748470    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1008 11:00:31.748503    8534 out.go:270] X Problems detected in kubelet:
	W1008 11:00:31.748508    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: W1008 17:56:28.929574    1952 reflector.go:324] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	W1008 11:00:31.748516    8534 out.go:270]   Oct 08 17:56:28 running-upgrade-967000 kubelet[1952]: E1008 17:56:28.929601    1952 reflector.go:138] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:running-upgrade-967000" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'running-upgrade-967000' and this object
	I1008 11:00:31.748519    8534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:00:31.748522    8534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:00:32.585474    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:32.585675    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:32.602503    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:32.602608    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:32.616502    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:32.616581    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:32.627745    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:32.627829    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:32.638339    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:32.638421    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:32.649039    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:32.649110    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:32.660091    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:32.660173    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:32.670518    8523 logs.go:282] 0 containers: []
	W1008 11:00:32.670530    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:32.670601    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:32.681032    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:32.681051    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:32.681056    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:32.694043    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:32.694059    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:32.698513    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:32.698520    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:32.737412    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:32.737426    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:32.749059    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:32.749070    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:32.770049    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:32.770058    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:32.784744    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:32.784758    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:32.802817    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:32.802828    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:32.816793    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:32.816806    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:32.832290    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:32.832303    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:32.846837    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:32.846850    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:32.858425    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:32.858435    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:32.882442    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:32.882454    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:32.895061    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:32.895077    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:32.925645    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:32.925657    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:32.944792    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:32.944804    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:32.958021    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:32.958031    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:35.472555    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:40.474768    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:40.474853    8523 kubeadm.go:597] duration metric: took 4m3.78538525s to restartPrimaryControlPlane
	W1008 11:00:40.474931    8523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 11:00:40.474964    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1008 11:00:41.506384    8523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031410208s)
	I1008 11:00:41.506465    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 11:00:41.511885    8523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 11:00:41.514988    8523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 11:00:41.517611    8523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 11:00:41.517619    8523 kubeadm.go:157] found existing configuration files:
	
	I1008 11:00:41.517653    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf
	I1008 11:00:41.520114    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 11:00:41.520148    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 11:00:41.523103    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf
	I1008 11:00:41.526006    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 11:00:41.526037    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 11:00:41.528554    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf
	I1008 11:00:41.531529    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 11:00:41.531565    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 11:00:41.534836    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf
	I1008 11:00:41.537593    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 11:00:41.537624    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
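Because the preceding `kubeadm reset` wiped `/etc/kubernetes`, the `ls` check exits with status 2 and each `grep` for the expected control-plane endpoint fails, so minikube removes every kubeconfig before re-running `kubeadm init`. The check-then-remove pattern above, condensed (illustrative; port 51227 is this profile's forwarded apiserver port):

    # Drop any kubeconfig that does not point at the expected control-plane endpoint.
    endpoint="https://control-plane.minikube.internal:51227"
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done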
	I1008 11:00:41.540220    8523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 11:00:41.558684    8523 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1008 11:00:41.558734    8523 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 11:00:41.608780    8523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 11:00:41.608854    8523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 11:00:41.608909    8523 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 11:00:41.661555    8523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 11:00:41.665744    8523 out.go:235]   - Generating certificates and keys ...
	I1008 11:00:41.665778    8523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 11:00:41.665807    8523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 11:00:41.665852    8523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 11:00:41.665887    8523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 11:00:41.665921    8523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 11:00:41.665957    8523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 11:00:41.665993    8523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 11:00:41.666028    8523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 11:00:41.666068    8523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 11:00:41.666107    8523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 11:00:41.666127    8523 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 11:00:41.666164    8523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 11:00:41.940328    8523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 11:00:42.069304    8523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 11:00:42.324133    8523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 11:00:42.385795    8523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 11:00:42.414169    8523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 11:00:42.414512    8523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 11:00:42.414534    8523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 11:00:42.504203    8523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 11:00:41.752611    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:42.508402    8523 out.go:235]   - Booting up control plane ...
	I1008 11:00:42.508533    8523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 11:00:42.508654    8523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 11:00:42.508703    8523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 11:00:42.509315    8523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 11:00:42.510174    8523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 11:00:46.754893    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:46.755050    8534 kubeadm.go:597] duration metric: took 4m7.99172875s to restartPrimaryControlPlane
	W1008 11:00:46.755152    8534 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 11:00:46.755204    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1008 11:00:47.788089    8534 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03287275s)
	I1008 11:00:47.788178    8534 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 11:00:47.793367    8534 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 11:00:47.796591    8534 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 11:00:47.799462    8534 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 11:00:47.799466    8534 kubeadm.go:157] found existing configuration files:
	
	I1008 11:00:47.799494    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf
	I1008 11:00:47.802292    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 11:00:47.802330    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 11:00:47.805107    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf
	I1008 11:00:47.807739    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 11:00:47.807770    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 11:00:47.811048    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf
	I1008 11:00:47.814193    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 11:00:47.814220    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 11:00:47.816782    8534 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf
	I1008 11:00:47.819584    8534 kubeadm.go:163] "https://control-plane.minikube.internal:51326" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51326 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 11:00:47.819607    8534 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 11:00:47.822770    8534 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 11:00:47.843327    8534 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1008 11:00:47.843363    8534 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 11:00:47.892895    8534 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 11:00:47.893045    8534 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 11:00:47.893144    8534 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 11:00:47.950053    8534 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 11:00:47.955672    8534 out.go:235]   - Generating certificates and keys ...
	I1008 11:00:47.955743    8534 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 11:00:47.955777    8534 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 11:00:47.955825    8534 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 11:00:47.955861    8534 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 11:00:47.955900    8534 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 11:00:47.956980    8534 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 11:00:47.957014    8534 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 11:00:47.957044    8534 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 11:00:47.957104    8534 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 11:00:47.957189    8534 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 11:00:47.957213    8534 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 11:00:47.957285    8534 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 11:00:48.074879    8534 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 11:00:48.203572    8534 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 11:00:47.512976    8523 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002445 seconds
	I1008 11:00:47.513032    8523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 11:00:47.517592    8523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 11:00:48.024928    8523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 11:00:48.025029    8523 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-810000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 11:00:48.528953    8523 kubeadm.go:310] [bootstrap-token] Using token: y0p1cj.lce64642rwb74wr7
	I1008 11:00:48.312210    8534 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 11:00:48.462380    8534 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 11:00:48.494250    8534 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 11:00:48.494531    8534 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 11:00:48.494601    8534 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 11:00:48.587750    8534 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 11:00:48.533101    8523 out.go:235]   - Configuring RBAC rules ...
	I1008 11:00:48.533163    8523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 11:00:48.533215    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 11:00:48.535027    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 11:00:48.540842    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 11:00:48.541923    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 11:00:48.543062    8523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 11:00:48.547042    8523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 11:00:48.729561    8523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 11:00:48.934751    8523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 11:00:48.935380    8523 kubeadm.go:310] 
	I1008 11:00:48.935482    8523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 11:00:48.935499    8523 kubeadm.go:310] 
	I1008 11:00:48.935622    8523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 11:00:48.935631    8523 kubeadm.go:310] 
	I1008 11:00:48.935668    8523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 11:00:48.935760    8523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 11:00:48.935830    8523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 11:00:48.935843    8523 kubeadm.go:310] 
	I1008 11:00:48.935918    8523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 11:00:48.935930    8523 kubeadm.go:310] 
	I1008 11:00:48.935992    8523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 11:00:48.936002    8523 kubeadm.go:310] 
	I1008 11:00:48.936088    8523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 11:00:48.936129    8523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 11:00:48.936177    8523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 11:00:48.936185    8523 kubeadm.go:310] 
	I1008 11:00:48.936229    8523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 11:00:48.936270    8523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 11:00:48.936278    8523 kubeadm.go:310] 
	I1008 11:00:48.936333    8523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y0p1cj.lce64642rwb74wr7 \
	I1008 11:00:48.936407    8523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a \
	I1008 11:00:48.936421    8523 kubeadm.go:310] 	--control-plane 
	I1008 11:00:48.936423    8523 kubeadm.go:310] 
	I1008 11:00:48.936465    8523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 11:00:48.936470    8523 kubeadm.go:310] 
	I1008 11:00:48.936517    8523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y0p1cj.lce64642rwb74wr7 \
	I1008 11:00:48.936602    8523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a 
	I1008 11:00:48.936777    8523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
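The `--discovery-token-ca-cert-hash` printed in the join command above is the SHA-256 digest of the cluster CA's public key (SPKI, DER-encoded). To recompute it, e.g. when verifying a join command, the standard recipe is as follows (CA path assumed from the `certificateDir` shown earlier, `/var/lib/minikube/certs`):

    # Recompute the discovery hash from the cluster CA certificate.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex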
	I1008 11:00:48.936788    8523 cni.go:84] Creating CNI manager for ""
	I1008 11:00:48.936797    8523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:00:48.940971    8523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 11:00:48.948854    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 11:00:48.952185    8523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
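The 496 bytes copied to `/etc/cni/net.d/1-k8s.conflist` configure the bridge CNI selected on the `cni.go:158` line above. The actual payload is not shown in the log; a representative bridge-plus-portmap conflist looks like this (illustrative values only, notably the pod subnet):

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }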
	I1008 11:00:48.958569    8523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 11:00:48.958648    8523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 11:00:48.958842    8523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-810000 minikube.k8s.io/updated_at=2024_10_08T11_00_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=stopped-upgrade-810000 minikube.k8s.io/primary=true
	I1008 11:00:48.988824    8523 kubeadm.go:1113] duration metric: took 30.237708ms to wait for elevateKubeSystemPrivileges
	I1008 11:00:48.999481    8523 ops.go:34] apiserver oom_adj: -16
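The `oom_adj: -16` reading above means the apiserver process is strongly biased against selection by the kernel OOM killer (the legacy scale runs from -17, never kill, to +15). The test reads it with the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command shown earlier; a hypothetical extra check that also reports the modern replacement knob:

    # oom_adj is the legacy knob; oom_score_adj is its replacement (-1000..1000).
    pid=$(pgrep -xn kube-apiserver)
    cat /proc/${pid}/oom_adj /proc/${pid}/oom_score_adj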
	I1008 11:00:48.999492    8523 kubeadm.go:394] duration metric: took 4m12.32408375s to StartCluster
	I1008 11:00:48.999504    8523 settings.go:142] acquiring lock: {Name:mk8a824673b36585a3cfee48bd81254259b5c84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:48.999690    8523 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:00:49.001124    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:49.001446    8523 config.go:182] Loaded profile config "stopped-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 11:00:49.001513    8523 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:00:49.001807    8523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 11:00:49.001983    8523 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-810000"
	I1008 11:00:49.001986    8523 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-810000"
	I1008 11:00:49.001991    8523 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-810000"
	W1008 11:00:49.001994    8523 addons.go:243] addon storage-provisioner should already be in state true
	I1008 11:00:49.001994    8523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-810000"
	I1008 11:00:49.002003    8523 host.go:66] Checking if "stopped-upgrade-810000" exists ...
	I1008 11:00:49.006026    8523 out.go:177] * Verifying Kubernetes components...
	I1008 11:00:49.006695    8523 kapi.go:59] client config for stopped-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a380f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I1008 11:00:49.010372    8523 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-810000"
	W1008 11:00:49.010382    8523 addons.go:243] addon default-storageclass should already be in state true
	I1008 11:00:49.010400    8523 host.go:66] Checking if "stopped-upgrade-810000" exists ...
	I1008 11:00:49.011232    8523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:49.011238    8523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 11:00:49.011243    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 11:00:49.012944    8523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 11:00:48.591227    8534 out.go:235]   - Booting up control plane ...
	I1008 11:00:48.591433    8534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 11:00:48.592617    8534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 11:00:48.592925    8534 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 11:00:48.594432    8534 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 11:00:48.594543    8534 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 11:00:49.016973    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 11:00:49.021081    8523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:49.021090    8523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 11:00:49.021096    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 11:00:49.106963    8523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 11:00:49.113265    8523 api_server.go:52] waiting for apiserver process to appear ...
	I1008 11:00:49.113323    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 11:00:49.117394    8523 api_server.go:72] duration metric: took 115.870084ms to wait for apiserver process to appear ...
	I1008 11:00:49.117402    8523 api_server.go:88] waiting for apiserver healthz status ...
	I1008 11:00:49.117409    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:49.136958    8523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:49.199000    8523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:49.523268    8523 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 11:00:49.523281    8523 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 11:00:53.598595    8534 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.004099 seconds
	I1008 11:00:53.598766    8534 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 11:00:53.609484    8534 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 11:00:54.119775    8534 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 11:00:54.119880    8534 kubeadm.go:310] [mark-control-plane] Marking the node running-upgrade-967000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 11:00:54.625707    8534 kubeadm.go:310] [bootstrap-token] Using token: tlxz13.k2jch30i1blbq3wh
	I1008 11:00:54.629129    8534 out.go:235]   - Configuring RBAC rules ...
	I1008 11:00:54.629208    8534 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 11:00:54.629318    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 11:00:54.636321    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 11:00:54.637552    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 11:00:54.638810    8534 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 11:00:54.639972    8534 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 11:00:54.644109    8534 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 11:00:54.806113    8534 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 11:00:55.031183    8534 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 11:00:55.031999    8534 kubeadm.go:310] 
	I1008 11:00:55.032032    8534 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 11:00:55.032036    8534 kubeadm.go:310] 
	I1008 11:00:55.032075    8534 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 11:00:55.032081    8534 kubeadm.go:310] 
	I1008 11:00:55.032102    8534 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 11:00:55.032131    8534 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 11:00:55.032158    8534 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 11:00:55.032161    8534 kubeadm.go:310] 
	I1008 11:00:55.032198    8534 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 11:00:55.032202    8534 kubeadm.go:310] 
	I1008 11:00:55.032234    8534 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 11:00:55.032239    8534 kubeadm.go:310] 
	I1008 11:00:55.032267    8534 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 11:00:55.032308    8534 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 11:00:55.032346    8534 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 11:00:55.032350    8534 kubeadm.go:310] 
	I1008 11:00:55.032404    8534 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 11:00:55.032442    8534 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 11:00:55.032445    8534 kubeadm.go:310] 
	I1008 11:00:55.032483    8534 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tlxz13.k2jch30i1blbq3wh \
	I1008 11:00:55.032537    8534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a \
	I1008 11:00:55.032548    8534 kubeadm.go:310] 	--control-plane 
	I1008 11:00:55.032553    8534 kubeadm.go:310] 
	I1008 11:00:55.032612    8534 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 11:00:55.032618    8534 kubeadm.go:310] 
	I1008 11:00:55.032654    8534 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tlxz13.k2jch30i1blbq3wh \
	I1008 11:00:55.032705    8534 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a 
	I1008 11:00:55.032846    8534 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 11:00:55.032853    8534 cni.go:84] Creating CNI manager for ""
	I1008 11:00:55.032861    8534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:00:55.037137    8534 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 11:00:55.043274    8534 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 11:00:55.046271    8534 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1008 11:00:55.051396    8534 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 11:00:55.051453    8534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 11:00:55.051481    8534 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-967000 minikube.k8s.io/updated_at=2024_10_08T11_00_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=running-upgrade-967000 minikube.k8s.io/primary=true
	I1008 11:00:55.094747    8534 ops.go:34] apiserver oom_adj: -16
	I1008 11:00:55.094748    8534 kubeadm.go:1113] duration metric: took 43.339042ms to wait for elevateKubeSystemPrivileges
	I1008 11:00:55.094762    8534 kubeadm.go:394] duration metric: took 4m16.359387291s to StartCluster
	I1008 11:00:55.094772    8534 settings.go:142] acquiring lock: {Name:mk8a824673b36585a3cfee48bd81254259b5c84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:55.094856    8534 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:00:55.095279    8534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:55.095482    8534 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:00:55.095506    8534 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 11:00:55.095541    8534 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-967000"
	I1008 11:00:55.095549    8534 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-967000"
	W1008 11:00:55.095553    8534 addons.go:243] addon storage-provisioner should already be in state true
	I1008 11:00:55.095564    8534 host.go:66] Checking if "running-upgrade-967000" exists ...
	I1008 11:00:55.095577    8534 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-967000"
	I1008 11:00:55.095585    8534 config.go:182] Loaded profile config "running-upgrade-967000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 11:00:55.095592    8534 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-967000"
	I1008 11:00:55.097152    8534 kapi.go:59] client config for running-upgrade-967000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/running-upgrade-967000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1060880f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 11:00:55.097279    8534 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-967000"
	W1008 11:00:55.097284    8534 addons.go:243] addon default-storageclass should already be in state true
	I1008 11:00:55.097292    8534 host.go:66] Checking if "running-upgrade-967000" exists ...
	I1008 11:00:55.100145    8534 out.go:177] * Verifying Kubernetes components...
	I1008 11:00:55.100486    8534 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:55.104351    8534 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 11:00:55.104358    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 11:00:55.108044    8534 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 11:00:55.112105    8534 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 11:00:55.116019    8534 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:55.116025    8534 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 11:00:55.116031    8534 sshutil.go:53] new ssh client: &{IP:localhost Port:51233 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/running-upgrade-967000/id_rsa Username:docker}
	I1008 11:00:55.185279    8534 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 11:00:55.190373    8534 api_server.go:52] waiting for apiserver process to appear ...
	I1008 11:00:55.190419    8534 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 11:00:55.194032    8534 api_server.go:72] duration metric: took 98.540083ms to wait for apiserver process to appear ...
	I1008 11:00:55.194042    8534 api_server.go:88] waiting for apiserver healthz status ...
	I1008 11:00:55.194050    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:55.208756    8534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:55.259872    8534 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:55.533302    8534 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 11:00:55.533314    8534 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 11:00:54.118554    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:54.118586    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:00.194359    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:00.194400    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:59.119511    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:59.119534    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:05.196129    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:05.196187    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:04.119736    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:04.119767    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:10.196417    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:10.196442    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:09.120042    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:09.120066    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:15.196758    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:15.196809    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:14.120458    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:14.120484    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:19.120984    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:19.121010    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1008 11:01:19.526216    8523 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1008 11:01:19.529135    8523 out.go:177] * Enabled addons: storage-provisioner
	I1008 11:01:20.197297    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:20.197331    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:19.541180    8523 addons.go:510] duration metric: took 30.539736959s for enable addons: enabled=[storage-provisioner]
	I1008 11:01:25.197875    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:25.197924    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1008 11:01:25.534325    8534 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1008 11:01:25.542584    8534 out.go:177] * Enabled addons: storage-provisioner
	I1008 11:01:25.550554    8534 addons.go:510] duration metric: took 30.45513075s for enable addons: enabled=[storage-provisioner]
	I1008 11:01:24.121718    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:24.121759    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:30.198751    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:30.198784    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:29.122569    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:29.122599    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:35.199722    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:35.199768    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:34.123635    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:34.123676    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:40.201167    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:40.201193    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:39.125095    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:39.125119    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:45.202751    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:45.202793    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:44.126734    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:44.126772    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:50.204672    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:50.204696    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:49.128858    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:49.128982    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:01:49.140015    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:01:49.140094    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:01:49.150971    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:01:49.151046    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:01:49.161720    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:01:49.161801    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:01:49.172876    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:01:49.172945    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:01:49.183709    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:01:49.183776    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:01:49.194823    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:01:49.194899    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:01:49.206414    8523 logs.go:282] 0 containers: []
	W1008 11:01:49.206429    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:01:49.206494    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:01:49.218771    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:01:49.218786    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:01:49.218792    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:01:49.236763    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:01:49.236774    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:01:49.247989    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:01:49.248002    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:01:49.259772    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:01:49.259786    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:01:49.264104    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:01:49.264110    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:01:49.279427    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:01:49.279438    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:01:49.293941    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:01:49.293952    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:01:49.304999    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:01:49.305008    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:01:49.324693    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:01:49.324704    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:01:49.348575    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:01:49.348587    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:01:49.382505    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:01:49.382513    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:01:49.420962    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:01:49.420974    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:01:49.433480    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:01:49.433494    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:01:51.950962    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:55.206817    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:55.206936    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:01:55.217637    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:01:55.217716    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:01:55.227730    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:01:55.227803    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:01:55.238890    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:01:55.238972    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:01:55.249226    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:01:55.249313    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:01:55.259792    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:01:55.259870    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:01:55.271689    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:01:55.271754    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:01:55.282125    8534 logs.go:282] 0 containers: []
	W1008 11:01:55.282136    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:01:55.282206    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:01:55.292978    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:01:55.292992    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:01:55.292998    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:01:55.327460    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:01:55.327467    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:01:55.331800    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:01:55.331808    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:01:55.369987    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:01:55.369999    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:01:55.384206    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:01:55.384219    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:01:55.399406    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:01:55.399420    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:01:55.417061    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:01:55.417072    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:01:55.428618    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:01:55.428630    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:01:55.443212    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:01:55.443226    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:01:55.454710    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:01:55.454724    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:01:55.466170    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:01:55.466182    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:01:55.480588    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:01:55.480599    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:01:55.492223    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:01:55.492238    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:01:58.017889    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:56.952035    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:56.952196    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:01:56.963748    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:01:56.963832    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:01:56.974593    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:01:56.974673    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:01:56.985820    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:01:56.985897    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:01:56.996767    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:01:56.996846    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:01:57.007321    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:01:57.007400    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:01:57.018088    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:01:57.018160    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:01:57.029090    8523 logs.go:282] 0 containers: []
	W1008 11:01:57.029103    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:01:57.029174    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:01:57.039674    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:01:57.039690    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:01:57.039696    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:01:57.053539    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:01:57.053551    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:01:57.064881    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:01:57.064893    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:01:57.082173    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:01:57.082184    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:01:57.094286    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:01:57.094301    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:01:57.111951    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:01:57.111963    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:01:57.146968    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:01:57.146980    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:01:57.151153    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:01:57.151162    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:01:57.165417    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:01:57.165427    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:01:57.178022    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:01:57.178036    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:01:57.190407    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:01:57.190422    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:01:57.214476    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:01:57.214485    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:01:57.225947    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:01:57.225964    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:03.020202    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:03.020450    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:03.036986    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:03.037088    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:03.049474    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:03.049560    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:03.064290    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:03.064361    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:03.074397    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:03.074471    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:03.087566    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:03.087653    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:03.097962    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:03.098042    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:03.108424    8534 logs.go:282] 0 containers: []
	W1008 11:02:03.108434    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:03.108493    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:03.119106    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:03.119124    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:03.119130    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:03.156231    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:03.156243    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:03.171736    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:03.171750    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:03.190293    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:03.190309    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:03.218605    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:03.218621    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:03.231198    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:03.231209    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:01:59.764630    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:03.243629    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:03.243642    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:03.257791    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:03.257802    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:03.292898    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:03.292914    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:03.297828    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:03.297836    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:03.312167    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:03.312183    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:03.326061    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:03.326071    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:03.337903    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:03.337915    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:05.851690    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:04.766971    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:04.767168    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:04.785610    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:04.785712    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:04.800323    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:04.800409    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:04.819783    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:04.819856    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:04.830417    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:04.830491    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:04.840820    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:04.840887    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:04.851678    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:04.851745    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:04.860998    8523 logs.go:282] 0 containers: []
	W1008 11:02:04.861007    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:04.861061    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:04.870960    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:04.870974    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:04.870983    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:04.884892    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:04.884903    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:04.922475    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:04.922483    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:04.926901    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:04.926909    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:04.938906    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:04.938921    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:04.954586    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:04.954595    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:04.966567    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:04.966579    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:04.984053    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:04.984074    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:05.022451    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:05.022466    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:05.037428    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:05.037442    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:05.051745    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:05.051756    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:05.063896    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:05.063908    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:05.087153    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:05.087163    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:07.601956    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:10.853976    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:10.854165    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:10.872381    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:10.872452    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:10.884946    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:10.885017    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:10.895464    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:10.895545    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:10.906048    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:10.906127    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:10.916240    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:10.916307    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:10.927047    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:10.927119    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:10.937275    8534 logs.go:282] 0 containers: []
	W1008 11:02:10.937287    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:10.937349    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:10.948049    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:10.948065    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:10.948070    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:10.985772    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:10.985784    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:11.005413    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:11.005424    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:11.018098    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:11.018109    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:11.031904    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:11.031915    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:11.043617    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:11.043629    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:11.055719    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:11.055729    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:11.092613    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:11.092622    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:11.097364    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:11.097371    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:11.110921    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:11.110931    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:11.122704    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:11.122715    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:11.141058    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:11.141068    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:11.158195    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:11.158211    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:12.603562    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:12.603770    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:12.622704    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:12.622810    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:12.636881    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:12.636956    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:12.647080    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:12.647163    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:12.658133    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:12.658216    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:12.669129    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:12.669214    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:12.679694    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:12.679774    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:12.689962    8523 logs.go:282] 0 containers: []
	W1008 11:02:12.689975    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:12.690033    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:12.700677    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:12.700692    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:12.700700    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:12.716056    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:12.716067    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:12.727494    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:12.727507    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:12.752711    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:12.752721    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:12.764357    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:12.764368    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:12.799335    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:12.799348    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:12.820501    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:12.820516    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:12.836610    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:12.836621    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:12.848606    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:12.848618    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:12.860361    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:12.860372    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:12.872318    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:12.872330    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:12.896050    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:12.896063    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:12.929856    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:12.929865    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:13.685136    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:15.436379    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:18.687157    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:18.687461    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:18.717230    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:18.717375    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:18.734902    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:18.735013    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:18.748966    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:18.749054    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:18.760738    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:18.760815    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:18.776433    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:18.776502    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:18.787239    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:18.787316    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:18.797231    8534 logs.go:282] 0 containers: []
	W1008 11:02:18.797243    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:18.797307    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:18.808660    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:18.808675    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:18.808680    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:18.820150    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:18.820162    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:18.832299    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:18.832311    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:18.846954    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:18.846964    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:18.885682    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:18.885698    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:18.900850    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:18.900863    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:18.915801    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:18.915813    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:18.937922    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:18.937933    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:18.950084    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:18.950097    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:18.974890    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:18.974899    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:18.986698    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:18.986712    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:19.021216    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:19.021224    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:19.025899    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:19.025906    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:21.538697    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:20.438667    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:20.438782    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:20.452143    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:20.452237    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:20.463630    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:20.463713    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:20.479240    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:20.479322    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:20.490467    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:20.490546    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:20.506431    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:20.506501    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:20.517124    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:20.517199    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:20.527512    8523 logs.go:282] 0 containers: []
	W1008 11:02:20.527526    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:20.527587    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:20.539486    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:20.539502    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:20.539510    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:20.553739    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:20.553753    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:20.565873    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:20.565887    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:20.582577    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:20.582590    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:20.594900    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:20.594909    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:20.619246    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:20.619252    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:20.655342    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:20.655354    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:20.694872    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:20.694884    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:20.709534    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:20.709547    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:20.721542    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:20.721554    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:20.733852    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:20.733863    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:20.738804    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:20.738814    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:20.754050    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:20.754060    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:26.540981    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:26.541155    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:26.554450    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:26.554522    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:26.565154    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:26.565235    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:26.575801    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:26.575880    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:26.586443    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:26.586508    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:26.596649    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:26.596724    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:26.607334    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:26.607401    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:26.617904    8534 logs.go:282] 0 containers: []
	W1008 11:02:26.617920    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:26.617987    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:26.628397    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:26.628432    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:26.628440    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:26.646793    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:26.646803    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:26.663889    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:26.663903    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:26.688189    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:26.688198    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:26.692873    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:26.692879    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:26.731118    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:26.731133    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:26.748949    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:26.748963    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:26.760168    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:26.760181    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:26.771882    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:26.771897    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:26.786540    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:26.786554    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:26.798780    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:26.798795    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:26.810588    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:26.810599    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:26.845010    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:26.845018    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:23.273110    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:29.359587    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:28.275298    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:28.275408    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:28.287090    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:28.287183    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:28.298621    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:28.298688    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:28.309224    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:28.309307    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:28.320534    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:28.320612    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:28.331439    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:28.331518    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:28.342235    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:28.342309    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:28.354059    8523 logs.go:282] 0 containers: []
	W1008 11:02:28.354073    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:28.354140    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:28.365023    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:28.365039    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:28.365044    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:28.377287    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:28.377300    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:28.393072    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:28.393087    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:28.418857    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:28.418864    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:28.455618    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:28.455636    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:28.459942    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:28.459950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:28.472156    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:28.472167    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:28.485773    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:28.485784    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:28.502037    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:28.502047    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:28.539889    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:28.539906    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:28.554900    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:28.554911    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:28.569863    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:28.569880    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:28.588419    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:28.588451    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:31.102632    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:34.361883    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:34.362011    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:34.373595    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:34.373684    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:34.387878    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:34.387961    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:34.397988    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:34.398075    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:34.412950    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:34.413028    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:34.423379    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:34.423461    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:34.433850    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:34.433927    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:34.443866    8534 logs.go:282] 0 containers: []
	W1008 11:02:34.443875    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:34.443938    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:34.460199    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:34.460214    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:34.460220    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:34.471915    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:34.471928    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:34.483829    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:34.483844    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:34.519536    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:34.519544    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:34.555060    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:34.555073    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:34.569381    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:34.569392    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:34.581845    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:34.581855    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:34.597976    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:34.597985    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:34.619932    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:34.619943    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:34.638345    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:34.638355    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:34.662911    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:34.662918    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:34.667129    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:34.667138    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:34.685269    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:34.685285    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:37.197154    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:36.104946    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:36.105170    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:36.125721    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:36.125820    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:36.140630    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:36.140709    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:36.152770    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:36.152842    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:36.164012    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:36.164086    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:36.175232    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:36.175299    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:36.186472    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:36.186539    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:36.197617    8523 logs.go:282] 0 containers: []
	W1008 11:02:36.197633    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:36.197698    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:36.209508    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:36.209526    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:36.209532    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:36.214529    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:36.214537    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:36.227123    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:36.227135    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:36.252027    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:36.252037    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:36.264018    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:36.264027    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:36.280754    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:36.280767    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:36.319502    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:36.319513    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:36.340099    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:36.340111    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:36.375059    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:36.375067    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:36.412058    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:36.412074    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:36.427953    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:36.427963    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:36.442521    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:36.442537    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:36.455436    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:36.455447    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:42.198388    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:42.198832    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:42.228213    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:42.228360    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:42.250629    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:42.250719    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:42.263459    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:42.263543    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:42.276771    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:42.276849    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:42.287675    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:42.287757    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:42.298497    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:42.298572    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:42.318730    8534 logs.go:282] 0 containers: []
	W1008 11:02:42.318742    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:42.318802    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:42.329427    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:42.329443    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:42.329448    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:42.367156    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:42.367168    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:42.382665    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:42.382680    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:42.394969    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:42.394980    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:42.406969    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:42.406981    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:42.426021    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:42.426033    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:42.451507    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:42.451522    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:42.463088    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:42.463105    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:42.467861    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:42.467867    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:42.509532    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:42.509543    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:42.525807    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:42.525819    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:42.541516    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:42.541531    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:42.555308    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:42.555321    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:38.969429    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:45.069044    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:43.971654    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:43.971766    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:43.984625    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:43.984725    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:43.996549    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:43.996658    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:44.008211    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:44.008297    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:44.019726    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:44.019805    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:44.030980    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:44.031066    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:44.042204    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:44.042280    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:44.053228    8523 logs.go:282] 0 containers: []
	W1008 11:02:44.053238    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:44.053299    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:44.064061    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:44.064077    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:44.064083    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:44.076330    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:44.076343    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:44.110577    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:44.110586    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:44.125204    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:44.125216    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:44.137522    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:44.137533    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:44.149594    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:44.149605    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:44.161639    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:44.161651    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:44.174138    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:44.174149    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:44.199347    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:44.199355    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:44.203615    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:44.203626    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:44.241776    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:44.241790    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:44.256917    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:44.256931    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:44.273515    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:44.273526    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:46.794677    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:50.071243    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:50.071368    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:50.082078    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:50.082177    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:50.092741    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:50.092824    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:50.103049    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:50.103123    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:50.113767    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:50.113835    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:50.124038    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:50.124125    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:50.139254    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:50.139330    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:50.152988    8534 logs.go:282] 0 containers: []
	W1008 11:02:50.152999    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:50.153071    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:50.163495    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:50.163511    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:50.163516    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:50.175217    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:50.175228    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:50.186502    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:50.186512    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:50.212162    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:50.212173    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:50.217225    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:50.217231    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:50.229248    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:50.229262    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:50.244969    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:50.244983    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:50.259393    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:50.259406    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:50.272297    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:50.272306    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:50.295905    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:50.295915    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:50.307612    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:50.307627    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:50.344588    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:50.344603    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:50.392536    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:50.392550    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:52.909383    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:51.796867    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:51.797056    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:51.811850    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:51.811942    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:51.823709    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:51.823785    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:51.835170    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:51.835252    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:51.846361    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:51.846440    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:51.857696    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:51.857771    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:51.868403    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:51.868478    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:51.879195    8523 logs.go:282] 0 containers: []
	W1008 11:02:51.879207    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:51.879267    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:51.889998    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:51.890014    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:51.890020    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:51.904783    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:51.904798    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:51.920543    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:51.920555    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:51.934145    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:51.934159    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:51.951968    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:51.951980    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:51.964418    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:51.964429    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:51.999730    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:51.999737    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:52.003635    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:52.003641    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:52.043390    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:52.043402    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:52.067202    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:52.067215    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:52.081650    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:52.081664    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:52.093883    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:52.093894    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:52.118183    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:52.118192    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:57.911604    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:57.911883    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:57.935577    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:02:57.935690    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:57.951824    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:02:57.951919    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:57.964895    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:02:57.964977    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:57.976528    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:02:57.976605    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:57.987522    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:02:57.987598    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:57.998356    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:02:57.998441    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:58.011006    8534 logs.go:282] 0 containers: []
	W1008 11:02:58.011018    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:58.011083    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:58.022616    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:02:58.022631    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:58.022637    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:58.057603    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:58.057612    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:58.061723    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:58.061730    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:58.097297    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:02:58.097308    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:02:58.112256    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:02:58.112268    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:02:58.130075    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:02:58.130086    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:02:58.141776    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:02:58.141786    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:02:58.155850    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:02:58.155861    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:02:58.169395    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:02:58.169406    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:02:58.188760    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:02:58.188770    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:02:58.200266    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:02:58.200274    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:02:58.218608    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:58.218619    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:54.631204    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:58.241964    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:02:58.241975    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:00.756009    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:59.633380    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:59.633527    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:59.646498    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:59.646578    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:59.658514    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:59.658596    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:59.668939    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:59.669034    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:59.679450    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:59.679526    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:59.691378    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:59.691461    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:59.702402    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:59.702477    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:59.712670    8523 logs.go:282] 0 containers: []
	W1008 11:02:59.712685    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:59.712760    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:59.722996    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:59.723011    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:59.723018    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:59.737175    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:59.737189    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:59.748872    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:59.748884    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:59.763903    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:59.763912    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:59.776005    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:59.776020    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:59.800355    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:59.800363    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:59.814108    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:59.814118    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:59.818815    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:59.818823    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:59.854478    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:59.854490    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:59.868909    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:59.868919    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:59.888522    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:59.888535    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:59.914595    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:59.914613    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:59.961098    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:59.961116    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:02.505643    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:05.758251    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:05.758490    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:05.778693    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:05.778776    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:05.792781    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:05.792863    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:05.804686    8534 logs.go:282] 2 containers: [bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:05.804769    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:05.815932    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:05.815997    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:05.826694    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:05.826779    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:05.837239    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:05.837317    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:05.847016    8534 logs.go:282] 0 containers: []
	W1008 11:03:05.847029    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:05.847095    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:05.857645    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:05.857659    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:05.857664    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:05.881185    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:05.881192    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:05.885917    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:05.885926    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:05.900897    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:05.900906    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:05.924129    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:05.924139    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:05.936367    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:05.936377    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:05.948542    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:05.948552    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:05.966078    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:05.966092    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:06.000780    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:06.000788    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:06.036391    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:06.036403    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:06.051176    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:06.051187    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:06.066126    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:06.066136    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:06.077508    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:06.077519    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:07.507337    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:07.507456    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:07.518110    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:07.518196    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:07.529634    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:07.529729    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:07.540471    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:07.540559    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:07.550634    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:07.550728    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:07.561470    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:07.561550    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:07.572305    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:07.572373    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:07.582554    8523 logs.go:282] 0 containers: []
	W1008 11:03:07.582566    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:07.582634    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:07.599687    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:07.599704    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:07.599710    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:07.611549    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:07.611565    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:07.626453    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:07.626466    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:07.632536    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:07.632547    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:07.666080    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:07.666095    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:07.680672    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:07.680686    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:07.694579    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:07.694593    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:07.706363    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:07.706379    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:07.718379    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:07.718390    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:07.729958    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:07.729970    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:07.742013    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:07.742025    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:07.755730    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:07.755741    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:07.773279    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:07.773290    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:07.809898    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:07.809907    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:07.824961    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:07.824972    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:08.591556    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:10.350782    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:13.593560    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:13.593651    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:13.608549    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:13.608636    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:13.619620    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:13.619707    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:13.630533    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:13.630610    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:13.641336    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:13.641415    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:13.652131    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:13.652203    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:13.663301    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:13.663374    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:13.673481    8534 logs.go:282] 0 containers: []
	W1008 11:03:13.673492    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:13.673557    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:13.683744    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:13.683761    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:13.683767    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:13.698647    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:13.698658    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:13.718025    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:13.718034    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:13.729774    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:13.729786    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:13.741652    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:13.741663    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:13.753502    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:13.753514    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:13.758111    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:13.758118    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:13.769326    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:13.769338    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:13.780793    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:13.780805    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:13.815368    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:13.815378    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:13.854623    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:13.854636    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:13.867902    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:13.867913    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:13.893421    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:13.893428    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:13.908046    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:13.908059    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:13.922527    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:13.922541    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:16.440283    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:15.352266    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:15.352439    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:15.368073    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:15.368170    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:15.381095    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:15.381180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:15.392356    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:15.392439    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:15.403084    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:15.403167    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:15.415429    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:15.415510    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:15.426042    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:15.426127    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:15.436407    8523 logs.go:282] 0 containers: []
	W1008 11:03:15.436418    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:15.436488    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:15.446687    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:15.446712    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:15.446718    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:15.458285    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:15.458300    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:15.475970    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:15.475981    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:15.489836    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:15.489848    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:15.501958    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:15.501972    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:15.517866    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:15.517875    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:15.530425    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:15.530433    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:15.534752    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:15.534759    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:15.569971    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:15.569983    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:15.587471    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:15.587484    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:15.603094    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:15.603106    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:15.639508    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:15.639518    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:15.651972    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:15.651984    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:15.677432    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:15.677445    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:15.689857    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:15.689871    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:18.204241    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
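	[Editor's note] The "1 containers: [...]" / "4 containers: [...]" lines in each cycle are produced by running `docker ps -a` with a `name=k8s_<component>` filter per control-plane component and collecting the printed IDs, as the Run: lines show. The sketch below reproduces that shape; the helper is hypothetical, not the logs.go implementation, though the docker command and flags mirror the log verbatim.

```go
// Hypothetical sketch of the container discovery seen in this log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of all containers (running or exited) whose
// name matches the k8s_<component> prefix kubelet gives pod containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the "N containers: [...]" shape in the log; an
		// empty result corresponds to the "No container was found
		// matching" warning for kindnet above.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```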
	I1008 11:03:21.442621    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:21.442872    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:21.466289    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:21.466424    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:21.483468    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:21.483548    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:21.496258    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:21.496345    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:21.507299    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:21.507379    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:21.522478    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:21.522550    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:21.533138    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:21.533216    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:21.544175    8534 logs.go:282] 0 containers: []
	W1008 11:03:21.544188    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:21.544253    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:21.554698    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:21.554717    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:21.554723    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:21.572850    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:21.572868    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:21.611720    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:21.611734    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:21.623691    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:21.623702    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:21.635699    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:21.635711    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:21.647929    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:21.647942    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:21.659569    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:21.659580    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:21.664316    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:21.664324    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:21.675842    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:21.675853    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:21.687761    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:21.687773    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:21.712161    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:21.712168    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:21.723932    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:21.723944    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:21.758803    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:21.758814    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:21.773580    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:21.773593    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:21.787434    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:21.787443    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:23.206526    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:23.206662    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:23.218963    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:23.219057    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:23.230204    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:23.230277    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:23.241481    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:23.241556    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:24.304870    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:23.256819    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:23.256894    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:23.268130    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:23.268210    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:23.278905    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:23.278987    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:23.289980    8523 logs.go:282] 0 containers: []
	W1008 11:03:23.289995    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:23.290061    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:23.300858    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:23.300879    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:23.300886    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:23.336883    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:23.336894    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:23.355597    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:23.355609    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:23.367495    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:23.367508    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:23.392809    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:23.392819    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:23.406997    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:23.407008    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:23.418583    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:23.418596    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:23.431194    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:23.431206    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:23.442742    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:23.442756    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:23.446940    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:23.446949    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:23.458038    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:23.458048    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:23.470091    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:23.470105    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:23.488726    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:23.488738    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:23.525044    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:23.525051    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:23.539934    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:23.539948    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:26.054383    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:29.307098    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:29.307269    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:29.323104    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:29.323203    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:29.335667    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:29.335755    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:29.346586    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:29.346669    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:29.356653    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:29.356732    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:29.368389    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:29.368469    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:29.378661    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:29.378739    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:29.388796    8534 logs.go:282] 0 containers: []
	W1008 11:03:29.388812    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:29.388884    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:29.399699    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:29.399714    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:29.399720    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:29.434216    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:29.434225    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:29.438550    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:29.438559    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:29.449669    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:29.449680    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:29.461139    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:29.461150    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:29.473260    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:29.473272    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:29.499391    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:29.499401    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:29.513774    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:29.513789    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:29.525486    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:29.525497    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:29.537151    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:29.537161    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:29.552397    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:29.552407    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:29.569679    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:29.569689    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:29.582100    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:29.582113    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:29.593519    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:29.593530    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:29.629931    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:29.629943    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:32.146291    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:31.056721    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:31.056897    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:31.069600    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:31.069682    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:31.081315    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:31.081396    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:31.092292    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:31.092373    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:31.102441    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:31.102521    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:31.113143    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:31.113221    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:31.131277    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:31.131351    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:31.141697    8523 logs.go:282] 0 containers: []
	W1008 11:03:31.141712    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:31.141784    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:31.159261    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:31.159279    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:31.159285    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:31.171328    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:31.171339    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:31.193261    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:31.193272    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:31.205471    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:31.205484    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:31.220363    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:31.220376    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:31.234130    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:31.234142    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:31.269908    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:31.269920    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:31.282221    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:31.282233    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:31.318239    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:31.318247    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:31.322719    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:31.322729    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:31.334470    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:31.334481    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:31.346125    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:31.346137    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:31.361039    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:31.361048    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:31.385204    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:31.385213    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:31.396397    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:31.396409    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:37.148599    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:37.148809    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:37.167688    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:37.167795    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:37.182120    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:37.182195    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:37.193201    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:37.193283    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:37.203843    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:37.203923    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:37.217169    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:37.217248    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:37.228193    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:37.228268    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:37.238718    8534 logs.go:282] 0 containers: []
	W1008 11:03:37.238728    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:37.238794    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:37.249391    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:37.249409    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:37.249414    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:37.264758    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:37.264769    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:37.277783    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:37.277795    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:37.297897    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:37.297909    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:37.309093    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:37.309104    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:37.324437    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:37.324449    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:37.335987    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:37.336001    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:37.370862    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:37.370876    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:37.385821    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:37.385831    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:37.397825    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:37.397838    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:37.422757    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:37.422765    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:37.458553    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:37.458568    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:37.470287    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:37.470299    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:37.490863    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:37.490878    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:37.508247    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:37.508257    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:33.909749    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:40.015125    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:38.912036    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:38.912185    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:38.926152    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:38.926240    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:38.937945    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:38.938027    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:38.948255    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:38.948344    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:38.959255    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:38.959335    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:38.972110    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:38.972180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:38.984501    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:38.984583    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:38.994579    8523 logs.go:282] 0 containers: []
	W1008 11:03:38.994593    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:38.994655    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:39.005140    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:39.005159    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:39.005166    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:39.040445    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:39.040457    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:39.055334    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:39.055345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:39.069413    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:39.069424    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:39.082656    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:39.082672    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:39.094542    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:39.094553    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:39.119820    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:39.119827    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:39.140043    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:39.140055    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:39.175715    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:39.175723    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:39.179877    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:39.179883    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:39.196910    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:39.196922    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:39.208273    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:39.208284    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:39.219688    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:39.219701    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:39.231698    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:39.231713    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:39.246073    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:39.246085    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:41.766720    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:45.017420    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:45.017595    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:45.031669    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:45.031769    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:45.050738    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:45.050824    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:45.062341    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:45.062414    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:45.072619    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:45.072700    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:45.083080    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:45.083163    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:45.099420    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:45.099495    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:45.112110    8534 logs.go:282] 0 containers: []
	W1008 11:03:45.112125    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:45.112194    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:45.123155    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:45.123174    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:45.123179    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:45.137472    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:45.137484    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:45.149570    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:45.149581    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:45.161028    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:45.161042    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:45.195465    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:45.195481    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:45.207717    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:45.207728    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:45.219751    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:45.219763    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:45.245767    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:45.245775    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:45.250140    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:45.250149    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:45.271288    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:45.271300    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:45.282982    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:45.282992    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:45.299754    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:45.299765    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:45.317400    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:45.317410    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:45.352198    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:45.352206    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:45.363382    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:45.363392    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:47.876550    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:46.767268    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:46.767457    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:46.778680    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:46.778760    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:46.788998    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:46.789076    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:46.803096    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:46.803180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:46.813568    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:46.813633    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:46.824382    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:46.824448    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:46.834853    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:46.834932    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:46.848110    8523 logs.go:282] 0 containers: []
	W1008 11:03:46.848123    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:46.848189    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:46.858305    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:46.858322    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:46.858328    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:46.892511    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:46.892524    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:46.904331    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:46.904345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:46.921933    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:46.921947    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:46.935948    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:46.935959    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:46.949184    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:46.949195    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:46.962478    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:46.962490    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:46.974492    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:46.974504    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:46.986340    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:46.986353    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:47.001237    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:47.001247    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:47.012931    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:47.012944    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:47.028211    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:47.028222    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:47.039604    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:47.039616    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:47.063016    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:47.063023    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:47.067076    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:47.067083    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:52.878851    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:52.878974    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:52.895478    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:03:52.895561    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:52.906808    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:03:52.906892    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:52.917788    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:03:52.917856    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:52.929019    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:03:52.929096    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:52.939599    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:03:52.939680    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:52.952141    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:03:52.952224    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:52.962842    8534 logs.go:282] 0 containers: []
	W1008 11:03:52.962854    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:52.962920    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:52.973315    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:03:52.973335    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:03:52.973339    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:03:52.985396    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:03:52.985408    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:03:53.002538    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:03:53.002552    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:03:53.017191    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:03:53.017208    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:03:53.034544    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:03:53.034555    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:03:53.048994    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:03:53.049004    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:03:53.062921    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:03:53.062932    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:03:53.075951    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:03:53.075963    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:03:53.087661    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:03:53.087672    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:53.099822    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:53.099834    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:53.104858    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:53.104867    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:53.128449    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:03:53.128457    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:03:53.140049    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:53.140059    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:53.175622    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:03:53.175638    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:03:53.187814    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:53.187824    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:49.607413    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:55.725182    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:54.609821    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:54.610347    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:54.653662    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:54.653829    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:54.674439    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:54.674575    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:54.689989    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:54.690082    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:54.702063    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:54.702151    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:54.713076    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:54.713156    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:54.724320    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:54.724402    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:54.736745    8523 logs.go:282] 0 containers: []
	W1008 11:03:54.736758    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:54.736829    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:54.752317    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:54.752335    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:54.752340    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:54.787006    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:54.787016    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:54.827240    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:54.827254    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:54.842774    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:54.842813    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:54.857075    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:54.857087    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:54.869712    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:54.869725    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:54.887718    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:54.887730    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:54.892480    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:54.892489    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:54.907557    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:54.907569    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:54.919858    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:54.919869    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:54.935696    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:54.935707    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:54.947656    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:54.947665    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:54.967110    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:54.967123    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:54.979515    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:54.979525    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:55.004941    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:55.004950    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:57.519335    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:00.726353    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:00.726635    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:00.749262    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:00.749366    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:00.775670    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:00.775755    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:00.787663    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:00.787749    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:00.798074    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:00.798152    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:00.809033    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:00.809124    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:00.824603    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:00.824672    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:00.834765    8534 logs.go:282] 0 containers: []
	W1008 11:04:00.834776    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:00.834843    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:00.845101    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:00.845118    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:00.845123    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:00.880054    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:00.880066    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:00.892462    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:00.892476    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:00.917075    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:00.917083    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:00.952583    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:00.952594    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:00.964649    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:00.964660    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:00.976437    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:00.976448    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:00.993912    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:00.993922    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:01.005564    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:01.005574    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:01.020798    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:01.020811    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:01.025863    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:01.025887    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:01.040510    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:01.040524    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:01.054832    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:01.054843    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:01.071672    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:01.071686    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:01.083388    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:01.083403    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
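	(The cycle above repeats for the remainder of this log: each test process probes the apiserver's /healthz endpoint, hits the client timeout, then re-enumerates the control-plane containers and tails their logs. A minimal sketch of reproducing that probe by hand, assuming the guest IP 10.0.2.15 and port 8443 shown above; the curl flags are an illustration, not minikube's own code:

		# Probe the apiserver health endpoint the way the retry loop does.
		# -k skips TLS verification; --max-time mirrors the client timeout seen above.
		curl -k --max-time 5 https://10.0.2.15:8443/healthz

	A healthy apiserver answers "ok"; in this run the request deadline is exceeded every time instead.)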
	I1008 11:04:02.521670    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:02.521825    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:02.534352    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:02.534438    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:02.544882    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:02.544949    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:02.555725    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:02.555807    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:02.565999    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:02.566098    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:02.576796    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:02.576869    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:02.587892    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:02.587964    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:02.599203    8523 logs.go:282] 0 containers: []
	W1008 11:04:02.599215    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:02.599280    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:02.609611    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:02.609627    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:02.609635    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:02.623863    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:02.623873    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:02.637960    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:02.637971    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:02.649867    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:02.649879    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:02.661984    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:02.661995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:02.674133    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:02.674145    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:02.686536    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:02.686547    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:02.711830    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:02.711838    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:02.723333    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:02.723345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:02.740656    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:02.740666    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:02.776430    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:02.776444    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:02.784600    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:02.784614    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:02.823140    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:02.823152    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:02.845902    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:02.845916    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:02.858318    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:02.858330    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:03.597884    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:05.375422    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:08.599757    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:08.599984    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:08.615222    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:08.615308    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:08.627508    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:08.627590    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:08.638676    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:08.638755    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:08.649256    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:08.649322    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:08.659966    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:08.660042    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:08.670535    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:08.670608    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:08.681469    8534 logs.go:282] 0 containers: []
	W1008 11:04:08.681481    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:08.681541    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:08.692769    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:08.692786    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:08.692793    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:08.708004    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:08.708016    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:08.719795    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:08.719808    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:08.731964    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:08.731975    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:08.746550    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:08.746562    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:08.760231    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:08.760245    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:08.774836    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:08.774848    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:08.786683    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:08.786696    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:08.800450    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:08.800461    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:08.817086    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:08.817098    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:08.855228    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:08.855240    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:08.878268    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:08.878280    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:08.896050    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:08.896061    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:08.919988    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:08.919997    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:08.954833    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:08.954847    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:11.461351    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:10.377726    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:10.377898    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:10.392386    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:10.392483    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:10.403532    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:10.403609    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:10.413985    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:10.414067    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:10.428020    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:10.428104    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:10.438527    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:10.438593    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:10.449290    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:10.449370    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:10.463156    8523 logs.go:282] 0 containers: []
	W1008 11:04:10.463169    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:10.463238    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:10.473468    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:10.473486    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:10.473491    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:10.487381    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:10.487393    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:10.502311    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:10.502325    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:10.514407    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:10.514424    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:10.528569    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:10.528582    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:10.540406    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:10.540420    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:10.555699    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:10.555708    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:10.567089    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:10.567102    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:10.571367    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:10.571374    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:10.606231    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:10.606245    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:10.620659    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:10.620676    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:10.645832    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:10.645846    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:10.679477    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:10.679486    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:10.691422    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:10.691433    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:10.704162    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:10.704173    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:13.223593    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:16.463712    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:16.464164    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:16.497310    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:16.497465    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:16.517167    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:16.517268    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:16.532398    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:16.532480    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:16.543772    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:16.543849    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:16.554844    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:16.554925    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:16.565216    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:16.565299    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:16.576354    8534 logs.go:282] 0 containers: []
	W1008 11:04:16.576370    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:16.576441    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:16.587343    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:16.587364    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:16.587371    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:16.593001    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:16.593010    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:16.605039    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:16.605049    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:16.630298    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:16.630309    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:16.642349    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:16.642361    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:16.677494    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:16.677512    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:16.712240    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:16.712252    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:16.727117    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:16.727128    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:16.741538    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:16.741549    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:16.752858    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:16.752873    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:16.764871    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:16.764882    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:16.779682    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:16.779695    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:16.791128    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:16.791138    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:16.803370    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:16.803380    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:16.819037    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:16.819048    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:18.225881    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:18.226042    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:18.241985    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:18.242079    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:19.338849    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:18.254050    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:18.254132    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:18.264801    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:18.264882    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:18.276440    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:18.276517    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:18.286946    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:18.287016    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:18.297369    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:18.297450    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:18.307757    8523 logs.go:282] 0 containers: []
	W1008 11:04:18.307769    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:18.307833    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:18.318898    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:18.318917    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:18.318922    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:18.333752    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:18.333764    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:18.357490    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:18.357498    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:18.361940    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:18.361948    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:18.373547    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:18.373557    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:18.385803    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:18.385815    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:18.401513    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:18.401524    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:18.419388    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:18.419399    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:18.453982    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:18.453995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:18.465833    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:18.465845    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:18.502285    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:18.502294    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:18.516601    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:18.516613    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:18.528950    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:18.528960    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:18.547385    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:18.547394    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:18.559278    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:18.559289    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:21.072853    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:24.341040    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:24.341157    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:24.356888    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:24.356973    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:24.367723    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:24.367800    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:24.378840    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:24.378923    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:24.389297    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:24.389375    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:24.399778    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:24.399851    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:24.410476    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:24.410553    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:24.420995    8534 logs.go:282] 0 containers: []
	W1008 11:04:24.421015    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:24.421080    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:24.432014    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:24.432035    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:24.432041    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:24.470034    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:24.470046    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:24.484556    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:24.484569    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:24.508461    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:24.508469    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:24.513514    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:24.513523    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:24.527922    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:24.527934    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:24.539914    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:24.539926    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:24.551109    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:24.551120    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:24.567969    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:24.567980    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:24.580170    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:24.580184    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:24.617666    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:24.617678    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:24.632851    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:24.632866    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:24.644867    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:24.644879    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:24.657076    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:24.657089    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:24.672419    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:24.672433    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:27.186434    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:26.075105    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:26.075230    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:26.090381    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:26.090458    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:26.100958    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:26.101039    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:26.111635    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:26.111717    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:26.122282    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:26.122361    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:26.132948    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:26.133022    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:26.143296    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:26.143377    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:26.153829    8523 logs.go:282] 0 containers: []
	W1008 11:04:26.153839    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:26.153903    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:26.164641    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:26.164657    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:26.164663    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:26.183625    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:26.183635    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:26.195088    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:26.195100    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:26.219576    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:26.219588    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:26.224170    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:26.224179    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:26.240304    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:26.240316    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:26.254710    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:26.254721    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:26.266434    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:26.266448    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:26.300499    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:26.300508    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:26.338105    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:26.338117    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:26.350239    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:26.350249    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:26.367389    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:26.367399    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:26.381377    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:26.381389    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:26.393317    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:26.393327    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:26.405286    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:26.405298    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
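	(Each gathering pass issues the same per-component commands quoted throughout this log, so the diagnostics can be replayed by hand inside the node. A sketch using the container IDs enumerated above; the ID is from this run and would differ elsewhere:

		# List the kube-apiserver container(s), as logs.go does per component.
		docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
		# Tail the last 400 lines of that component's log by container ID.
		docker logs --tail 400 955e04ba9714

	Host-level sources are collected the same way via journalctl for kubelet/Docker and dmesg for the kernel, as the Run lines above show.)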
	I1008 11:04:32.188732    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:32.188922    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:32.201623    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:32.201719    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:32.212169    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:32.212242    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:32.222921    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:32.222992    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:32.233981    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:32.234061    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:32.247566    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:32.247640    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:32.258535    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:32.258615    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:32.270577    8534 logs.go:282] 0 containers: []
	W1008 11:04:32.270591    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:32.270662    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:32.281500    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:32.281517    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:32.281522    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:32.296543    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:32.296554    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:32.311400    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:32.311412    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:32.347804    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:32.347814    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:32.359688    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:32.359699    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:32.381256    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:32.381266    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:32.393148    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:32.393160    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:32.404564    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:32.404575    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:32.416810    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:32.416820    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:32.429015    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:32.429026    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:32.440541    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:32.440556    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:32.464210    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:32.464218    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:32.477486    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:32.477498    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:32.512554    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:32.512566    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:32.526869    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:32.526880    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:28.919281    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:35.033439    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:33.921522    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:33.921666    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:33.936144    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:33.936236    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:33.950099    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:33.950180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:33.960831    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:33.960914    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:33.971943    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:33.972017    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:33.982973    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:33.983041    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:33.993415    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:33.993485    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:34.003970    8523 logs.go:282] 0 containers: []
	W1008 11:04:34.004007    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:34.004077    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:34.020282    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:34.020300    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:34.020307    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:34.033108    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:34.033118    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:34.057493    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:34.057502    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:34.092156    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:34.092164    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:34.132805    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:34.132817    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:34.148469    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:34.148486    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:34.160549    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:34.160562    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:34.172305    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:34.172320    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:34.184263    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:34.184275    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:34.198636    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:34.198650    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:34.210996    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:34.211007    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:34.226574    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:34.226588    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:34.246888    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:34.246901    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:34.251415    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:34.251422    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:34.263842    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:34.263852    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:36.781562    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:39.980305    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:39.980553    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:39.996601    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:39.996704    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:40.008904    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:40.008976    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:40.019565    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:40.019647    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:40.029933    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:40.030007    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:40.041224    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:40.041323    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:40.052933    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:40.053028    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:40.067695    8534 logs.go:282] 0 containers: []
	W1008 11:04:40.067707    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:40.067776    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:40.078234    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:40.078253    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:40.078261    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:40.090083    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:40.090096    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:40.103210    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:40.103223    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:40.118085    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:40.118096    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:40.141040    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:40.141050    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:40.155336    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:40.155347    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:40.173583    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:40.173595    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:40.185391    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:40.185403    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:40.200553    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:40.200564    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:40.216564    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:40.216576    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:40.227896    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:40.227908    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:40.262320    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:40.262332    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:40.267042    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:40.267051    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:40.278452    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:40.278468    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:40.289551    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:40.289565    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:42.826131    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:41.728239    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:41.728547    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:41.755113    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:41.755259    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:41.777122    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:41.777225    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:41.789850    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:41.789934    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:41.801298    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:41.801367    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:41.811985    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:41.812062    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:41.822749    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:41.822826    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:41.832866    8523 logs.go:282] 0 containers: []
	W1008 11:04:41.832880    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:41.832934    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:41.843806    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:41.843823    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:41.843828    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:41.866377    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:41.866389    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:41.878669    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:41.878682    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:41.895532    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:41.895543    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:41.907561    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:41.907572    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:41.942311    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:41.942331    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:41.964985    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:41.965002    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:41.977425    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:41.977436    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:41.992484    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:41.992496    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:42.008632    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:42.008642    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:42.020609    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:42.020620    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:42.038555    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:42.038566    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:42.050189    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:42.050200    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:42.074472    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:42.074479    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:42.078754    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:42.078762    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:47.828294    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:47.828581    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:47.853239    8534 logs.go:282] 1 containers: [955e04ba9714]
	I1008 11:04:47.853375    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:47.870319    8534 logs.go:282] 1 containers: [9c9e0a8e03d8]
	I1008 11:04:47.870411    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:47.883241    8534 logs.go:282] 4 containers: [ad84b44e3ceb 36a64c2063d4 bb6761a3d1f5 0743e3bf710a]
	I1008 11:04:47.883325    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:47.894570    8534 logs.go:282] 1 containers: [62e3a70ad0ac]
	I1008 11:04:47.894643    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:47.904983    8534 logs.go:282] 1 containers: [4f4f43a5241a]
	I1008 11:04:47.905063    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:47.920198    8534 logs.go:282] 1 containers: [3b6861fabe36]
	I1008 11:04:47.920275    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:47.931254    8534 logs.go:282] 0 containers: []
	W1008 11:04:47.931264    8534 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:47.931333    8534 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:47.943552    8534 logs.go:282] 1 containers: [93de4b90a2ba]
	I1008 11:04:47.943571    8534 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:47.943576    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:47.980569    8534 logs.go:123] Gathering logs for coredns [bb6761a3d1f5] ...
	I1008 11:04:47.980581    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bb6761a3d1f5"
	I1008 11:04:47.993236    8534 logs.go:123] Gathering logs for kube-scheduler [62e3a70ad0ac] ...
	I1008 11:04:47.993248    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 62e3a70ad0ac"
	I1008 11:04:48.013026    8534 logs.go:123] Gathering logs for container status ...
	I1008 11:04:48.013041    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:48.026492    8534 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:48.026504    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:48.063927    8534 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:48.063938    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:48.068762    8534 logs.go:123] Gathering logs for etcd [9c9e0a8e03d8] ...
	I1008 11:04:48.068769    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c9e0a8e03d8"
	I1008 11:04:48.083420    8534 logs.go:123] Gathering logs for coredns [0743e3bf710a] ...
	I1008 11:04:48.083434    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0743e3bf710a"
	I1008 11:04:48.099626    8534 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:48.099641    8534 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:48.123710    8534 logs.go:123] Gathering logs for coredns [ad84b44e3ceb] ...
	I1008 11:04:48.123718    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ad84b44e3ceb"
	I1008 11:04:48.135901    8534 logs.go:123] Gathering logs for coredns [36a64c2063d4] ...
	I1008 11:04:48.135913    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 36a64c2063d4"
	I1008 11:04:48.147854    8534 logs.go:123] Gathering logs for kube-controller-manager [3b6861fabe36] ...
	I1008 11:04:48.147869    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3b6861fabe36"
	I1008 11:04:48.165802    8534 logs.go:123] Gathering logs for storage-provisioner [93de4b90a2ba] ...
	I1008 11:04:48.165814    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 93de4b90a2ba"
	I1008 11:04:48.178414    8534 logs.go:123] Gathering logs for kube-apiserver [955e04ba9714] ...
	I1008 11:04:48.178423    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 955e04ba9714"
	I1008 11:04:44.614753    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:49.617024    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:49.621449    8523 out.go:201] 
	W1008 11:04:49.625319    8523 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1008 11:04:49.625325    8523 out.go:270] * 
	W1008 11:04:49.626338    8523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:04:49.637327    8523 out.go:201] 
	I1008 11:04:48.193657    8534 logs.go:123] Gathering logs for kube-proxy [4f4f43a5241a] ...
	I1008 11:04:48.193668    8534 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4f4f43a5241a"
	I1008 11:04:50.707414    8534 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:55.709471    8534 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:55.713953    8534 out.go:201] 
	W1008 11:04:55.718896    8534 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1008 11:04:55.718907    8534 out.go:270] * 
	W1008 11:04:55.719903    8534 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:04:55.731857    8534 out.go:201] 
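
Both invocations give up the same way: the 6-minute node-start budget expires while polling https://10.0.2.15:8443/healthz, so each `minikube start` exits with GUEST_START. The roughly five-second gap between every "Checking apiserver healthz" line and its "stopped" line is the per-request client timeout. A stripped-down sketch of that polling loop, an illustration rather than minikube's actual api_server.go:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches the ~5s "Checking" -> "stopped" spacing above
			Transport: &http.Transport{
				// minikube verifies against the cluster CA; skipped here to keep the sketch short
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(6 * time.Minute) // the "wait 6m0s for node" budget from the error
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://10.0.2.15:8443/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthz reported healthy")
					return
				}
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("apiserver healthz never reported healthy: context deadline exceeded")
	}
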
	
	
	==> Docker <==
	-- Journal begins at Tue 2024-10-08 17:55:50 UTC, ends at Tue 2024-10-08 18:05:11 UTC. --
	Oct 08 18:04:56 running-upgrade-967000 dockerd[3111]: time="2024-10-08T18:04:56.562187493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 08 18:04:56 running-upgrade-967000 dockerd[3111]: time="2024-10-08T18:04:56.562323824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 08 18:04:56 running-upgrade-967000 dockerd[3111]: time="2024-10-08T18:04:56.562351615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 08 18:04:56 running-upgrade-967000 dockerd[3111]: time="2024-10-08T18:04:56.563989041Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4a4737431cb824a34fc9da2df69971ac8741a960030b0a92a3fc0f1c85da1831 pid=17785 runtime=io.containerd.runc.v2
	Oct 08 18:04:57 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:57Z" level=error msg="ContainerStats resp: {0x40008980c0 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x4000826ac0 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x40008c6cc0 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x40008c7000 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x4000827d00 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x400097e3c0 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x400097e940 linux}"
	Oct 08 18:04:58 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:04:58Z" level=error msg="ContainerStats resp: {0x400097f000 linux}"
	Oct 08 18:05:03 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:03Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 08 18:05:08 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:08Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Oct 08 18:05:08 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:08Z" level=error msg="ContainerStats resp: {0x4000970180 linux}"
	Oct 08 18:05:08 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:08Z" level=error msg="ContainerStats resp: {0x4000970140 linux}"
	Oct 08 18:05:09 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:09Z" level=error msg="ContainerStats resp: {0x4000971d40 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x4000898100 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x4000898540 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x400076f300 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x400076f7c0 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x400076fd40 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x4000899440 linux}"
	Oct 08 18:05:10 running-upgrade-967000 cri-dockerd[2953]: time="2024-10-08T18:05:10Z" level=error msg="ContainerStats resp: {0x40003a0d00 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	4a4737431cb82       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   a5073c1f77d28
	10a4a24def039       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   05606a7e2b50c
	ad84b44e3ceb3       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   05606a7e2b50c
	36a64c2063d44       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   a5073c1f77d28
	4f4f43a5241a3       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   5fca9d29a8253
	93de4b90a2ba3       66749159455b3       4 minutes ago       Running             storage-provisioner       0                   587c13ef27afb
	3b6861fabe36e       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   9e3629b5dfdc3
	9c9e0a8e03d88       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   e0c01106b1a59
	62e3a70ad0ac9       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   60c62542eddbf
	955e04ba9714d       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   29c79b6d19523
	
	
	==> coredns [10a4a24def03] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 5961514406028941732.626267931533837495. HINFO: read udp 10.244.0.3:40467->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5961514406028941732.626267931533837495. HINFO: read udp 10.244.0.3:46983->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 5961514406028941732.626267931533837495. HINFO: read udp 10.244.0.3:39385->10.0.2.3:53: i/o timeout
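
CoreDNS itself starts cleanly in every one of these sections (the reload MD5 and version banner print normally); what fails is its upstream. Each HINFO readiness probe to 10.0.2.3:53, the QEMU user-mode-networking DNS forwarder, times out on the UDP read. The same failure mode can be reproduced with a short Go probe, assuming it is run somewhere that routes to 10.0.2.3 (i.e. inside the guest):

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				// force every query to the upstream CoreDNS is using
				return d.DialContext(ctx, "udp", "10.0.2.3:53")
			},
		}
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
		defer cancel()
		addrs, err := r.LookupHost(ctx, "kubernetes.io")
		if err != nil {
			fmt.Println("lookup failed, as in the i/o timeouts above:", err)
			return
		}
		fmt.Println("upstream DNS answered:", addrs)
	}
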
	
	
	==> coredns [36a64c2063d4] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2077267087212587095.2105906110493741915. HINFO: read udp 10.244.0.2:60305->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2077267087212587095.2105906110493741915. HINFO: read udp 10.244.0.2:37327->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2077267087212587095.2105906110493741915. HINFO: read udp 10.244.0.2:40806->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2077267087212587095.2105906110493741915. HINFO: read udp 10.244.0.2:56131->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [4a4737431cb8] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4355881768299922769.6234718504509290286. HINFO: read udp 10.244.0.2:51226->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4355881768299922769.6234718504509290286. HINFO: read udp 10.244.0.2:50247->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4355881768299922769.6234718504509290286. HINFO: read udp 10.244.0.2:45164->10.0.2.3:53: i/o timeout
	
	
	==> coredns [ad84b44e3ceb] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:43209->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:41855->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:43451->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:34925->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:52343->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:51269->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:38894->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:36880->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:47572->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1573818113748212815.5313272141722691178. HINFO: read udp 10.244.0.3:50535->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-967000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-967000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e
	                    minikube.k8s.io/name=running-upgrade-967000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_08T11_00_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 08 Oct 2024 18:00:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-967000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 08 Oct 2024 18:05:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 08 Oct 2024 18:00:54 +0000   Tue, 08 Oct 2024 18:00:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 08 Oct 2024 18:00:54 +0000   Tue, 08 Oct 2024 18:00:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 08 Oct 2024 18:00:54 +0000   Tue, 08 Oct 2024 18:00:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 08 Oct 2024 18:00:54 +0000   Tue, 08 Oct 2024 18:00:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-967000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 d5ae794dc39543c895f4919b81f421d7
	  System UUID:                d5ae794dc39543c895f4919b81f421d7
	  Boot ID:                    207b75df-73cd-4cb6-b46f-2ef01cc124d6
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-b7tcm                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-kbmml                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-967000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m18s
	  kube-system                 kube-apiserver-running-upgrade-967000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-controller-manager-running-upgrade-967000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-mkbfk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-967000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m3s   kube-proxy       
	  Normal  NodeReady                4m18s  kubelet          Node running-upgrade-967000 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s  kubelet          Node running-upgrade-967000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s  kubelet          Node running-upgrade-967000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s  kubelet          Node running-upgrade-967000 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m18s  kubelet          Starting kubelet.
	  Normal  RegisteredNode           4m5s   node-controller  Node running-upgrade-967000 event: Registered Node running-upgrade-967000 in Controller
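
The "Allocated resources" percentages follow directly from the pod table and the Allocatable block: CPU requests sum to 100m+100m+100m+250m+200m+100m = 850m against 2 cores, memory requests to 70Mi+70Mi+100Mi = 240Mi and limits to 170Mi+170Mi = 340Mi against 2148820Ki. Reproducing the integer percentages kubectl prints (it truncates, so 42.5% shows as 42%):

	package main

	import "fmt"

	func main() {
		const allocatableMilliCPU = 2000 // Allocatable: cpu 2
		const allocatableMemKi = 2148820 // Allocatable: memory 2148820Ki
		cpuReqMilli := 100 + 100 + 100 + 250 + 200 + 100 // two coredns, etcd, apiserver, controller-manager, scheduler
		memReqKi := (70 + 70 + 100) * 1024               // 240Mi of requests, in Ki
		memLimKi := (170 + 170) * 1024                   // 340Mi of limits, in Ki
		fmt.Printf("cpu %d%%\n", cpuReqMilli*100/allocatableMilliCPU)       // 42
		fmt.Printf("memory requests %d%%\n", memReqKi*100/allocatableMemKi) // 11
		fmt.Printf("memory limits %d%%\n", memLimKi*100/allocatableMemKi)   // 16
	}
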
	
	
	==> dmesg <==
	[  +1.798174] systemd-fstab-generator[875]: Ignoring "noauto" for root device
	[  +0.058131] systemd-fstab-generator[886]: Ignoring "noauto" for root device
	[  +0.061022] systemd-fstab-generator[897]: Ignoring "noauto" for root device
	[  +1.138269] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.082170] systemd-fstab-generator[1045]: Ignoring "noauto" for root device
	[  +0.066200] systemd-fstab-generator[1056]: Ignoring "noauto" for root device
	[  +2.516154] systemd-fstab-generator[1284]: Ignoring "noauto" for root device
	[  +9.628462] systemd-fstab-generator[1946]: Ignoring "noauto" for root device
	[ +12.808311] systemd-fstab-generator[2297]: Ignoring "noauto" for root device
	[  +0.155157] systemd-fstab-generator[2331]: Ignoring "noauto" for root device
	[  +0.090509] systemd-fstab-generator[2342]: Ignoring "noauto" for root device
	[  +0.091758] systemd-fstab-generator[2355]: Ignoring "noauto" for root device
	[  +2.382851] kauditd_printk_skb: 47 callbacks suppressed
	[  +0.203299] systemd-fstab-generator[2910]: Ignoring "noauto" for root device
	[  +0.086101] systemd-fstab-generator[2921]: Ignoring "noauto" for root device
	[  +0.088805] systemd-fstab-generator[2932]: Ignoring "noauto" for root device
	[  +0.087344] systemd-fstab-generator[2946]: Ignoring "noauto" for root device
	[  +2.744444] systemd-fstab-generator[3098]: Ignoring "noauto" for root device
	[  +3.084150] systemd-fstab-generator[3633]: Ignoring "noauto" for root device
	[  +2.420232] systemd-fstab-generator[4283]: Ignoring "noauto" for root device
	[ +16.311649] kauditd_printk_skb: 68 callbacks suppressed
	[Oct 8 17:57] kauditd_printk_skb: 21 callbacks suppressed
	[Oct 8 18:00] systemd-fstab-generator[10823]: Ignoring "noauto" for root device
	[  +6.128976] systemd-fstab-generator[11443]: Ignoring "noauto" for root device
	[  +0.466418] systemd-fstab-generator[11573]: Ignoring "noauto" for root device
	
	
	==> etcd [9c9e0a8e03d8] <==
	{"level":"info","ts":"2024-10-08T18:00:50.005Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-10-08T18:00:50.005Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-10-08T18:00:50.011Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-10-08T18:00:50.011Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-10-08T18:00:50.011Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-08T18:00:50.011Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-10-08T18:00:50.011Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-10-08T18:00:50.298Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-967000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-10-08T18:00:50.299Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:00:50.300Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-08T18:00:50.300Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:00:50.300Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-10-08T18:00:50.301Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-10-08T18:00:50.301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-10-08T18:00:50.301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-10-08T18:00:50.301Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:00:50.302Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-10-08T18:00:50.302Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 18:05:12 up 9 min,  0 users,  load average: 0.04, 0.19, 0.15
	Linux running-upgrade-967000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [955e04ba9714] <==
	I1008 18:00:52.181655       1 cache.go:39] Caches are synced for autoregister controller
	I1008 18:00:52.231147       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1008 18:00:52.231181       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1008 18:00:52.231252       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1008 18:00:52.231592       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1008 18:00:52.248107       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1008 18:00:52.248112       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1008 18:00:52.921780       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1008 18:00:53.085065       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1008 18:00:53.087316       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1008 18:00:53.087336       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1008 18:00:53.214149       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1008 18:00:53.223851       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1008 18:00:53.272260       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W1008 18:00:53.274357       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I1008 18:00:53.274745       1 controller.go:611] quota admission added evaluator for: endpoints
	I1008 18:00:53.276020       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1008 18:00:54.217081       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1008 18:00:54.818326       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1008 18:00:54.823105       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I1008 18:00:54.845817       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1008 18:00:54.893278       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1008 18:01:07.795290       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I1008 18:01:08.281836       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1008 18:01:08.295873       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3b6861fabe36] <==
	I1008 18:01:07.359599       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I1008 18:01:07.361716       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1008 18:01:07.364907       1 shared_informer.go:262] Caches are synced for stateful set
	I1008 18:01:07.367061       1 shared_informer.go:262] Caches are synced for expand
	I1008 18:01:07.369247       1 shared_informer.go:262] Caches are synced for attach detach
	I1008 18:01:07.393730       1 shared_informer.go:262] Caches are synced for endpoint
	I1008 18:01:07.393965       1 shared_informer.go:262] Caches are synced for TTL
	I1008 18:01:07.393989       1 shared_informer.go:262] Caches are synced for service account
	I1008 18:01:07.394545       1 shared_informer.go:262] Caches are synced for PVC protection
	I1008 18:01:07.394561       1 shared_informer.go:262] Caches are synced for ClusterRoleAggregator
	I1008 18:01:07.394873       1 shared_informer.go:262] Caches are synced for HPA
	I1008 18:01:07.393993       1 shared_informer.go:262] Caches are synced for job
	I1008 18:01:07.396842       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1008 18:01:07.444925       1 shared_informer.go:262] Caches are synced for disruption
	I1008 18:01:07.444996       1 disruption.go:371] Sending events to api server.
	I1008 18:01:07.445049       1 shared_informer.go:262] Caches are synced for deployment
	I1008 18:01:07.555279       1 shared_informer.go:262] Caches are synced for resource quota
	I1008 18:01:07.596657       1 shared_informer.go:262] Caches are synced for resource quota
	I1008 18:01:07.798374       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-mkbfk"
	I1008 18:01:08.008155       1 shared_informer.go:262] Caches are synced for garbage collector
	I1008 18:01:08.072439       1 shared_informer.go:262] Caches are synced for garbage collector
	I1008 18:01:08.072457       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1008 18:01:08.298119       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I1008 18:01:08.396371       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-kbmml"
	I1008 18:01:08.400781       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-b7tcm"
	
	
	==> kube-proxy [4f4f43a5241a] <==
	I1008 18:01:08.270692       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I1008 18:01:08.270716       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I1008 18:01:08.270726       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1008 18:01:08.279783       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1008 18:01:08.279794       1 server_others.go:206] "Using iptables Proxier"
	I1008 18:01:08.279817       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1008 18:01:08.279951       1 server.go:661] "Version info" version="v1.24.1"
	I1008 18:01:08.279980       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1008 18:01:08.280250       1 config.go:317] "Starting service config controller"
	I1008 18:01:08.280265       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1008 18:01:08.280273       1 config.go:226] "Starting endpoint slice config controller"
	I1008 18:01:08.280274       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1008 18:01:08.280527       1 config.go:444] "Starting node config controller"
	I1008 18:01:08.280551       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1008 18:01:08.381357       1 shared_informer.go:262] Caches are synced for node config
	I1008 18:01:08.381369       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1008 18:01:08.381379       1 shared_informer.go:262] Caches are synced for service config
	
	
	==> kube-scheduler [62e3a70ad0ac] <==
	W1008 18:00:52.157843       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 18:00:52.157868       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1008 18:00:52.157904       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1008 18:00:52.157921       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1008 18:00:52.158044       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1008 18:00:52.158062       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1008 18:00:52.158112       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1008 18:00:52.158250       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1008 18:00:52.158335       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1008 18:00:52.158345       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1008 18:00:52.158889       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1008 18:00:52.158898       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1008 18:00:52.159261       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1008 18:00:52.159271       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1008 18:00:53.024177       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1008 18:00:53.024207       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1008 18:00:53.033691       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1008 18:00:53.033708       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1008 18:00:53.096611       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1008 18:00:53.096754       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1008 18:00:53.114335       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1008 18:00:53.114417       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1008 18:00:53.132016       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1008 18:00:53.132093       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1008 18:00:53.555838       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-10-08 17:55:50 UTC, ends at Tue 2024-10-08 18:05:12 UTC. --
	Oct 08 18:00:55 running-upgrade-967000 kubelet[11449]: I1008 18:00:55.849733   11449 apiserver.go:52] "Watching apiserver"
	Oct 08 18:00:56 running-upgrade-967000 kubelet[11449]: I1008 18:00:56.276859   11449 reconciler.go:157] "Reconciler: start to sync state"
	Oct 08 18:00:56 running-upgrade-967000 kubelet[11449]: E1008 18:00:56.453112   11449 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-apiserver-running-upgrade-967000\" already exists" pod="kube-system/kube-apiserver-running-upgrade-967000"
	Oct 08 18:00:56 running-upgrade-967000 kubelet[11449]: E1008 18:00:56.658767   11449 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-967000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-967000"
	Oct 08 18:00:56 running-upgrade-967000 kubelet[11449]: E1008 18:00:56.851473   11449 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-967000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-967000"
	Oct 08 18:00:57 running-upgrade-967000 kubelet[11449]: I1008 18:00:57.048488   11449 request.go:601] Waited for 1.107064274s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods
	Oct 08 18:00:57 running-upgrade-967000 kubelet[11449]: E1008 18:00:57.051650   11449 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-967000\" already exists" pod="kube-system/etcd-running-upgrade-967000"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.323795   11449 topology_manager.go:200] "Topology Admit Handler"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.390088   11449 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.390098   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/00211eca-dd35-40a9-acac-7afc5fa81f96-tmp\") pod \"storage-provisioner\" (UID: \"00211eca-dd35-40a9-acac-7afc5fa81f96\") " pod="kube-system/storage-provisioner"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.390250   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4bg5\" (UniqueName: \"kubernetes.io/projected/00211eca-dd35-40a9-acac-7afc5fa81f96-kube-api-access-g4bg5\") pod \"storage-provisioner\" (UID: \"00211eca-dd35-40a9-acac-7afc5fa81f96\") " pod="kube-system/storage-provisioner"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.390433   11449 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.804232   11449 topology_manager.go:200] "Topology Admit Handler"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.893545   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb0497cf-198d-4414-91c5-1154f2cc0418-xtables-lock\") pod \"kube-proxy-mkbfk\" (UID: \"bb0497cf-198d-4414-91c5-1154f2cc0418\") " pod="kube-system/kube-proxy-mkbfk"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.893574   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb0497cf-198d-4414-91c5-1154f2cc0418-lib-modules\") pod \"kube-proxy-mkbfk\" (UID: \"bb0497cf-198d-4414-91c5-1154f2cc0418\") " pod="kube-system/kube-proxy-mkbfk"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.893585   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnmch\" (UniqueName: \"kubernetes.io/projected/bb0497cf-198d-4414-91c5-1154f2cc0418-kube-api-access-rnmch\") pod \"kube-proxy-mkbfk\" (UID: \"bb0497cf-198d-4414-91c5-1154f2cc0418\") " pod="kube-system/kube-proxy-mkbfk"
	Oct 08 18:01:07 running-upgrade-967000 kubelet[11449]: I1008 18:01:07.893596   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/bb0497cf-198d-4414-91c5-1154f2cc0418-kube-proxy\") pod \"kube-proxy-mkbfk\" (UID: \"bb0497cf-198d-4414-91c5-1154f2cc0418\") " pod="kube-system/kube-proxy-mkbfk"
	Oct 08 18:01:08 running-upgrade-967000 kubelet[11449]: I1008 18:01:08.398563   11449 topology_manager.go:200] "Topology Admit Handler"
	Oct 08 18:01:08 running-upgrade-967000 kubelet[11449]: I1008 18:01:08.404448   11449 topology_manager.go:200] "Topology Admit Handler"
	Oct 08 18:01:08 running-upgrade-967000 kubelet[11449]: I1008 18:01:08.498179   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caf084be-7d1e-47f6-b0c4-010942598483-config-volume\") pod \"coredns-6d4b75cb6d-b7tcm\" (UID: \"caf084be-7d1e-47f6-b0c4-010942598483\") " pod="kube-system/coredns-6d4b75cb6d-b7tcm"
	Oct 08 18:01:08 running-upgrade-967000 kubelet[11449]: I1008 18:01:08.498289   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x6vt\" (UniqueName: \"kubernetes.io/projected/cc1d52c3-6873-4308-a0bf-c00359bad390-kube-api-access-5x6vt\") pod \"coredns-6d4b75cb6d-kbmml\" (UID: \"cc1d52c3-6873-4308-a0bf-c00359bad390\") " pod="kube-system/coredns-6d4b75cb6d-kbmml"
	Oct 08 18:01:08 running-upgrade-967000 kubelet[11449]: I1008 18:01:08.498343   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc1d52c3-6873-4308-a0bf-c00359bad390-config-volume\") pod \"coredns-6d4b75cb6d-kbmml\" (UID: \"cc1d52c3-6873-4308-a0bf-c00359bad390\") " pod="kube-system/coredns-6d4b75cb6d-kbmml"
	Oct 08 18:01:08 running-upgrade-967000 kubelet[11449]: I1008 18:01:08.498364   11449 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h458v\" (UniqueName: \"kubernetes.io/projected/caf084be-7d1e-47f6-b0c4-010942598483-kube-api-access-h458v\") pod \"coredns-6d4b75cb6d-b7tcm\" (UID: \"caf084be-7d1e-47f6-b0c4-010942598483\") " pod="kube-system/coredns-6d4b75cb6d-b7tcm"
	Oct 08 18:04:57 running-upgrade-967000 kubelet[11449]: I1008 18:04:57.056708   11449 scope.go:110] "RemoveContainer" containerID="0743e3bf710a267e92ab25865e257fcb39d077fc9b79f2dd0223a1ddf253bcc4"
	Oct 08 18:04:57 running-upgrade-967000 kubelet[11449]: I1008 18:04:57.068599   11449 scope.go:110] "RemoveContainer" containerID="bb6761a3d1f5032b919d6965ec7d17cc336d304a801e9bfc9c97023ddd33ecc8"
	
	
	==> storage-provisioner [93de4b90a2ba] <==
	I1008 18:01:07.824147       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1008 18:01:07.827727       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1008 18:01:07.827748       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1008 18:01:07.830733       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1008 18:01:07.830850       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"722f197e-1a3c-41bd-bb3b-fcaa27927646", APIVersion:"v1", ResourceVersion:"320", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-967000_3ffc4c85-669e-407d-a9da-36627532c311 became leader
	I1008 18:01:07.830870       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-967000_3ffc4c85-669e-407d-a9da-36627532c311!
	I1008 18:01:07.932919       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-967000_3ffc4c85-669e-407d-a9da-36627532c311!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-967000 -n running-upgrade-967000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-967000 -n running-upgrade-967000: exit status 2 (15.78049375s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-967000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-967000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-967000
--- FAIL: TestRunningBinaryUpgrade (642.10s)

                                                
                                    
TestKubernetesUpgrade (17.45s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-143000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-143000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (10.085067917s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-143000" primary control-plane node in "kubernetes-upgrade-143000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-143000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
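
This failure never reaches Kubernetes at all: the qemu2 driver on macOS needs the socket_vmnet daemon listening on /var/run/socket_vmnet, and "Connection refused" means nothing is accepting on that unix socket, so both VM creation attempts abort. A minimal reproduction of the check, an illustration rather than minikube code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Println("socket_vmnet is not accepting connections:", err) // the driver's "Connection refused"
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is up")
	}
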
** stderr ** 
	I1008 10:54:28.871800    8442 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:54:28.871989    8442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:54:28.871993    8442 out.go:358] Setting ErrFile to fd 2...
	I1008 10:54:28.871995    8442 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:54:28.872127    8442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:54:28.873309    8442 out.go:352] Setting JSON to false
	I1008 10:54:28.891059    8442 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5038,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:54:28.891150    8442 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:54:28.896150    8442 out.go:177] * [kubernetes-upgrade-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:54:28.903206    8442 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:54:28.903256    8442 notify.go:220] Checking for updates...
	I1008 10:54:28.909176    8442 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:54:28.912163    8442 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:54:28.915148    8442 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:54:28.918132    8442 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:54:28.921044    8442 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:54:28.924382    8442 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:54:28.924445    8442 config.go:182] Loaded profile config "offline-docker-841000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:54:28.924502    8442 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:54:28.929112    8442 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 10:54:28.936108    8442 start.go:297] selected driver: qemu2
	I1008 10:54:28.936114    8442 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:54:28.936126    8442 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:54:28.938561    8442 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:54:28.941242    8442 out.go:177] * Automatically selected the socket_vmnet network
	I1008 10:54:28.944251    8442 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 10:54:28.944269    8442 cni.go:84] Creating CNI manager for ""
	I1008 10:54:28.944296    8442 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1008 10:54:28.944323    8442 start.go:340] cluster config:
	{Name:kubernetes-upgrade-143000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:54:28.948928    8442 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:54:28.956939    8442 out.go:177] * Starting "kubernetes-upgrade-143000" primary control-plane node in "kubernetes-upgrade-143000" cluster
	I1008 10:54:28.961067    8442 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:54:28.961082    8442 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 10:54:28.961091    8442 cache.go:56] Caching tarball of preloaded images
	I1008 10:54:28.961178    8442 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:54:28.961184    8442 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1008 10:54:28.961247    8442 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kubernetes-upgrade-143000/config.json ...
	I1008 10:54:28.961258    8442 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kubernetes-upgrade-143000/config.json: {Name:mk543fe3c37741d0c466a11339fd7f991145f88b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:54:28.961551    8442 start.go:360] acquireMachinesLock for kubernetes-upgrade-143000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:54:29.034937    8442 start.go:364] duration metric: took 73.376166ms to acquireMachinesLock for "kubernetes-upgrade-143000"
	I1008 10:54:29.034982    8442 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:54:29.035042    8442 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:54:29.043255    8442 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:54:29.070922    8442 start.go:159] libmachine.API.Create for "kubernetes-upgrade-143000" (driver="qemu2")
	I1008 10:54:29.070957    8442 client.go:168] LocalClient.Create starting
	I1008 10:54:29.071068    8442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:54:29.071127    8442 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:29.071142    8442 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:29.071207    8442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:54:29.071252    8442 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:29.071265    8442 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:29.071840    8442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:54:29.232987    8442 main.go:141] libmachine: Creating SSH key...
	I1008 10:54:29.401516    8442 main.go:141] libmachine: Creating Disk image...
	I1008 10:54:29.401523    8442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:54:29.401761    8442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:29.412285    8442 main.go:141] libmachine: STDOUT: 
	I1008 10:54:29.412307    8442 main.go:141] libmachine: STDERR: 
	I1008 10:54:29.412366    8442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2 +20000M
	I1008 10:54:29.420808    8442 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:54:29.420822    8442 main.go:141] libmachine: STDERR: 
	I1008 10:54:29.420834    8442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:29.420840    8442 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:54:29.420852    8442 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:54:29.420891    8442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1e:ec:a1:4d:a6:8e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:29.422746    8442 main.go:141] libmachine: STDOUT: 
	I1008 10:54:29.422766    8442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:54:29.422784    8442 client.go:171] duration metric: took 351.820709ms to LocalClient.Create
	I1008 10:54:31.424998    8442 start.go:128] duration metric: took 2.389923417s to createHost
	I1008 10:54:31.425098    8442 start.go:83] releasing machines lock for "kubernetes-upgrade-143000", held for 2.39014075s
	W1008 10:54:31.425246    8442 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:31.439314    8442 out.go:177] * Deleting "kubernetes-upgrade-143000" in qemu2 ...
	W1008 10:54:31.468841    8442 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:31.468873    8442 start.go:729] Will try again in 5 seconds ...
	I1008 10:54:36.471017    8442 start.go:360] acquireMachinesLock for kubernetes-upgrade-143000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:54:36.486511    8442 start.go:364] duration metric: took 15.37325ms to acquireMachinesLock for "kubernetes-upgrade-143000"
	I1008 10:54:36.486670    8442 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 10:54:36.486919    8442 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 10:54:36.495393    8442 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 10:54:36.544069    8442 start.go:159] libmachine.API.Create for "kubernetes-upgrade-143000" (driver="qemu2")
	I1008 10:54:36.544119    8442 client.go:168] LocalClient.Create starting
	I1008 10:54:36.544246    8442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 10:54:36.544301    8442 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:36.544318    8442 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:36.544378    8442 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 10:54:36.544422    8442 main.go:141] libmachine: Decoding PEM data...
	I1008 10:54:36.544434    8442 main.go:141] libmachine: Parsing certificate...
	I1008 10:54:36.544998    8442 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 10:54:36.739565    8442 main.go:141] libmachine: Creating SSH key...
	I1008 10:54:36.863609    8442 main.go:141] libmachine: Creating Disk image...
	I1008 10:54:36.863615    8442 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 10:54:36.863820    8442 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:36.874239    8442 main.go:141] libmachine: STDOUT: 
	I1008 10:54:36.874263    8442 main.go:141] libmachine: STDERR: 
	I1008 10:54:36.874322    8442 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2 +20000M
	I1008 10:54:36.882863    8442 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 10:54:36.882885    8442 main.go:141] libmachine: STDERR: 
	I1008 10:54:36.882894    8442 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:36.882900    8442 main.go:141] libmachine: Starting QEMU VM...
	I1008 10:54:36.882906    8442 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:54:36.882942    8442 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:69:e5:00:65:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:36.884810    8442 main.go:141] libmachine: STDOUT: 
	I1008 10:54:36.884825    8442 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:54:36.884839    8442 client.go:171] duration metric: took 340.716417ms to LocalClient.Create
	I1008 10:54:38.885446    8442 start.go:128] duration metric: took 2.398504417s to createHost
	I1008 10:54:38.885499    8442 start.go:83] releasing machines lock for "kubernetes-upgrade-143000", held for 2.398969042s
	W1008 10:54:38.885858    8442 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-143000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:38.895763    8442 out.go:201] 
	W1008 10:54:38.901790    8442 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:54:38.901822    8442 out.go:270] * 
	* 
	W1008 10:54:38.904753    8442 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:54:38.911711    8442 out.go:201] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-143000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
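Note: both create attempts above die on the same host-side error: QEMU is launched through socket_vmnet_client (see the libmachine "executing:" lines), which gets "Connection refused" on /var/run/socket_vmnet, i.e. no socket_vmnet daemon was listening on the CI host. A hedged diagnostic sketch, not part of the test run; the Homebrew service name is an assumption:

	ls -l /var/run/socket_vmnet              # does the socket exist at all?
	pgrep -fl socket_vmnet                   # is the daemon process alive?
	sudo brew services restart socket_vmnet  # restart it, if the daemon is managed by Homebrew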
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-143000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-143000: (1.895054542s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-143000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-143000 status --format={{.Host}}: exit status 7 (70.315084ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
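Note: the "(may be ok)" reflects minikube's status convention: the exit code is a bitmask of not-running components rather than a plain pass/fail (assumed from minikube's status.go, one bit each for the host, the cluster, and kubernetes), so exit status 7 with output "Stopped" simply encodes a fully stopped cluster. A sketch of reading the bits by hand:

	out/minikube-darwin-arm64 -p kubernetes-upgrade-143000 status --format='{{.Host}}'; echo "status bits: $?"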
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-143000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-143000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.217884958s)

-- stdout --
	* [kubernetes-upgrade-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-143000" primary control-plane node in "kubernetes-upgrade-143000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-143000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 10:54:40.927805    8477 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:54:40.927964    8477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:54:40.927967    8477 out.go:358] Setting ErrFile to fd 2...
	I1008 10:54:40.927970    8477 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:54:40.928102    8477 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:54:40.929171    8477 out.go:352] Setting JSON to false
	I1008 10:54:40.947388    8477 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5050,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:54:40.947455    8477 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:54:40.952365    8477 out.go:177] * [kubernetes-upgrade-143000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:54:40.959277    8477 notify.go:220] Checking for updates...
	I1008 10:54:40.963312    8477 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:54:40.971304    8477 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:54:40.979327    8477 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:54:40.987247    8477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:54:40.995198    8477 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:54:41.003261    8477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:54:41.007559    8477 config.go:182] Loaded profile config "kubernetes-upgrade-143000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1008 10:54:41.007857    8477 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:54:41.012339    8477 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:54:41.019217    8477 start.go:297] selected driver: qemu2
	I1008 10:54:41.019224    8477 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:54:41.019278    8477 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:54:41.021941    8477 cni.go:84] Creating CNI manager for ""
	I1008 10:54:41.021978    8477 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:54:41.022015    8477 start.go:340] cluster config:
	{Name:kubernetes-upgrade-143000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubernetes-upgrade-143000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:54:41.026642    8477 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:54:41.034316    8477 out.go:177] * Starting "kubernetes-upgrade-143000" primary control-plane node in "kubernetes-upgrade-143000" cluster
	I1008 10:54:41.037313    8477 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:54:41.037329    8477 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:54:41.037337    8477 cache.go:56] Caching tarball of preloaded images
	I1008 10:54:41.037405    8477 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:54:41.037411    8477 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 10:54:41.037476    8477 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kubernetes-upgrade-143000/config.json ...
	I1008 10:54:41.037771    8477 start.go:360] acquireMachinesLock for kubernetes-upgrade-143000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:54:41.037818    8477 start.go:364] duration metric: took 41µs to acquireMachinesLock for "kubernetes-upgrade-143000"
	I1008 10:54:41.037826    8477 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:54:41.037830    8477 fix.go:54] fixHost starting: 
	I1008 10:54:41.037946    8477 fix.go:112] recreateIfNeeded on kubernetes-upgrade-143000: state=Stopped err=<nil>
	W1008 10:54:41.037956    8477 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:54:41.042261    8477 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-143000" ...
	I1008 10:54:41.049162    8477 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:54:41.049204    8477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:69:e5:00:65:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:41.051290    8477 main.go:141] libmachine: STDOUT: 
	I1008 10:54:41.051309    8477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:54:41.051338    8477 fix.go:56] duration metric: took 13.506166ms for fixHost
	I1008 10:54:41.051350    8477 start.go:83] releasing machines lock for "kubernetes-upgrade-143000", held for 13.521125ms
	W1008 10:54:41.051358    8477 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:54:41.051393    8477 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:41.051397    8477 start.go:729] Will try again in 5 seconds ...
	I1008 10:54:46.052667    8477 start.go:360] acquireMachinesLock for kubernetes-upgrade-143000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:54:46.053074    8477 start.go:364] duration metric: took 334.959µs to acquireMachinesLock for "kubernetes-upgrade-143000"
	I1008 10:54:46.053196    8477 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:54:46.053219    8477 fix.go:54] fixHost starting: 
	I1008 10:54:46.053942    8477 fix.go:112] recreateIfNeeded on kubernetes-upgrade-143000: state=Stopped err=<nil>
	W1008 10:54:46.053969    8477 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:54:46.059553    8477 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-143000" ...
	I1008 10:54:46.065127    8477 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:54:46.065304    8477 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:69:e5:00:65:48 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubernetes-upgrade-143000/disk.qcow2
	I1008 10:54:46.075900    8477 main.go:141] libmachine: STDOUT: 
	I1008 10:54:46.075961    8477 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 10:54:46.076039    8477 fix.go:56] duration metric: took 22.825875ms for fixHost
	I1008 10:54:46.076091    8477 start.go:83] releasing machines lock for "kubernetes-upgrade-143000", held for 22.960834ms
	W1008 10:54:46.076324    8477 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-143000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 10:54:46.084507    8477 out.go:201] 
	W1008 10:54:46.087659    8477 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 10:54:46.087699    8477 out.go:270] * 
	* 
	W1008 10:54:46.090247    8477 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:54:46.099437    8477 out.go:201] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-143000 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-143000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-143000 version --output=json: exit status 1 (64.652375ms)

** stderr ** 
	error: context "kubernetes-upgrade-143000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
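Note: the kubectl failure is downstream of the failed starts: provisioning never completed, so no "kubernetes-upgrade-143000" context was ever written to the kubeconfig under test. A quick manual check, with the kubeconfig path taken from the run above:

	KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig kubectl config get-contexts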
panic.go:629: *** TestKubernetesUpgrade FAILED at 2024-10-08 10:54:46.179049 -0700 PDT m=+756.745406668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-143000 -n kubernetes-upgrade-143000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-143000 -n kubernetes-upgrade-143000: exit status 7 (38.187167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-143000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-143000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-143000
--- FAIL: TestKubernetesUpgrade (17.45s)
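Note: for readers without the test source, the steps logged above (version_upgrade_test.go:222-248) boil down to the sequence below; a rough shell sketch of the same flow, abridged from the commands in this run:

	P=kubernetes-upgrade-143000
	out/minikube-darwin-arm64 start -p $P --memory=2200 --kubernetes-version=v1.20.0 --driver=qemu2   # oldest supported version
	out/minikube-darwin-arm64 stop -p $P
	out/minikube-darwin-arm64 start -p $P --memory=2200 --kubernetes-version=v1.31.1 --driver=qemu2   # upgrade in place
	kubectl --context $P version --output=json                                                        # verify the upgraded server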

TestStoppedBinaryUpgrade/Upgrade (608.51s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.215629238 start -p stopped-upgrade-810000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.215629238 start -p stopped-upgrade-810000 --memory=2200 --vm-driver=qemu2 : (1m14.699011542s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.215629238 -p stopped-upgrade-810000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/minikube-v1.26.0.215629238 -p stopped-upgrade-810000 stop: (12.097970208s)
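Note: unlike the starts above, the legacy v1.26.0 binary boots its VM with QEMU user-mode networking (visible in the stderr log below as -nic user,...,hostfwd=...), which needs no host daemon; that difference, rather than anything Kubernetes-related, is plausibly why this VM comes up while every socket_vmnet-backed start in this report is refused. Both invocations, abridged from the logs in this report:

	qemu-system-aarch64 ... -nic user,model=virtio,hostfwd=tcp::51195-:22,...                                                # v1.26.0: user-mode NIC, no daemon needed
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 ... -netdev socket,id=net0,fd=3 ...  # v1.34.0: requires a running socket_vmnet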
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-810000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-810000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m41.621680875s)

-- stdout --
	* [stopped-upgrade-810000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-810000" primary control-plane node in "stopped-upgrade-810000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-810000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I1008 10:56:08.244104    8523 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:56:08.244743    8523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:56:08.244749    8523 out.go:358] Setting ErrFile to fd 2...
	I1008 10:56:08.244752    8523 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:56:08.244906    8523 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:56:08.246214    8523 out.go:352] Setting JSON to false
	I1008 10:56:08.265804    8523 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5138,"bootTime":1728405030,"procs":568,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:56:08.265882    8523 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:56:08.270199    8523 out.go:177] * [stopped-upgrade-810000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:56:08.278546    8523 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:56:08.278697    8523 notify.go:220] Checking for updates...
	I1008 10:56:08.286108    8523 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:08.289105    8523 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:56:08.292066    8523 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:56:08.295055    8523 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:56:08.298090    8523 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:56:08.301326    8523 config.go:182] Loaded profile config "stopped-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:08.305113    8523 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 10:56:08.308662    8523 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:56:08.313066    8523 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:56:08.320136    8523 start.go:297] selected driver: qemu2
	I1008 10:56:08.320165    8523 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51227 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:08.320226    8523 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:56:08.323346    8523 cni.go:84] Creating CNI manager for ""
	I1008 10:56:08.323389    8523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:56:08.323553    8523 start.go:340] cluster config:
	{Name:stopped-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51227 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:08.323756    8523 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:56:08.332055    8523 out.go:177] * Starting "stopped-upgrade-810000" primary control-plane node in "stopped-upgrade-810000" cluster
	I1008 10:56:08.335004    8523 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:08.335019    8523 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I1008 10:56:08.335036    8523 cache.go:56] Caching tarball of preloaded images
	I1008 10:56:08.335122    8523 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 10:56:08.335127    8523 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I1008 10:56:08.335189    8523 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/config.json ...
	I1008 10:56:08.335624    8523 start.go:360] acquireMachinesLock for stopped-upgrade-810000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 10:56:08.335670    8523 start.go:364] duration metric: took 39.625µs to acquireMachinesLock for "stopped-upgrade-810000"
	I1008 10:56:08.335682    8523 start.go:96] Skipping create...Using existing machine configuration
	I1008 10:56:08.335686    8523 fix.go:54] fixHost starting: 
	I1008 10:56:08.335788    8523 fix.go:112] recreateIfNeeded on stopped-upgrade-810000: state=Stopped err=<nil>
	W1008 10:56:08.335796    8523 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 10:56:08.342023    8523 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-810000" ...
	I1008 10:56:08.346231    8523 qemu.go:418] Using hvf for hardware acceleration
	I1008 10:56:08.346324    8523 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/9.1.0/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/qemu.pid -nic user,model=virtio,hostfwd=tcp::51195-:22,hostfwd=tcp::51196-:2376,hostname=stopped-upgrade-810000 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/disk.qcow2
	I1008 10:56:08.392236    8523 main.go:141] libmachine: STDOUT: 
	I1008 10:56:08.392262    8523 main.go:141] libmachine: STDERR: 
	I1008 10:56:08.392270    8523 main.go:141] libmachine: Waiting for VM to start (ssh -p 51195 docker@127.0.0.1)...
	I1008 10:56:27.427719    8523 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/config.json ...
	I1008 10:56:27.427970    8523 machine.go:93] provisionDockerMachine start ...
	I1008 10:56:27.428024    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.428215    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.428221    8523 main.go:141] libmachine: About to run SSH command:
	hostname
	I1008 10:56:27.478482    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1008 10:56:27.478531    8523 buildroot.go:166] provisioning hostname "stopped-upgrade-810000"
	I1008 10:56:27.478602    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.478719    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.478728    8523 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-810000 && echo "stopped-upgrade-810000" | sudo tee /etc/hostname
	I1008 10:56:27.534221    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-810000
	
	I1008 10:56:27.534287    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.534391    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.534400    8523 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-810000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-810000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-810000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1008 10:56:27.588499    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: 
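
Hostname provisioning runs two SSH commands: one to set the hostname and persist it to /etc/hostname, and the /etc/hosts patch shown above, which rewrites an existing 127.0.1.1 entry or appends one. A sketch that renders the same script for an arbitrary machine name, assuming only the shell text visible in the log (the template parameter is the profile name):

    package main

    import (
    	"os"
    	"text/template"
    )

    // hostsPatch mirrors the shell run over SSH above: ensure /etc/hosts maps
    // 127.0.1.1 to the machine name, editing an existing entry when present.
    var hostsPatch = template.Must(template.New("hosts").Parse(
    	`if ! grep -xq '.*\s{{.}}' /etc/hosts; then
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 {{.}}/g' /etc/hosts
      else
        echo '127.0.1.1 {{.}}' | sudo tee -a /etc/hosts
      fi
    fi
    `))

    func main() {
    	// Render the script for the profile seen in the log.
    	_ = hostsPatch.Execute(os.Stdout, "stopped-upgrade-810000")
    }
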
	I1008 10:56:27.588517    8523 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19774-6384/.minikube CaCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19774-6384/.minikube}
	I1008 10:56:27.588528    8523 buildroot.go:174] setting up certificates
	I1008 10:56:27.588533    8523 provision.go:84] configureAuth start
	I1008 10:56:27.588557    8523 provision.go:143] copyHostCerts
	I1008 10:56:27.588646    8523 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem, removing ...
	I1008 10:56:27.589446    8523 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem
	I1008 10:56:27.589567    8523 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.pem (1078 bytes)
	I1008 10:56:27.589732    8523 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem, removing ...
	I1008 10:56:27.589736    8523 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem
	I1008 10:56:27.589792    8523 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/cert.pem (1123 bytes)
	I1008 10:56:27.589899    8523 exec_runner.go:144] found /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem, removing ...
	I1008 10:56:27.589902    8523 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem
	I1008 10:56:27.589954    8523 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19774-6384/.minikube/key.pem (1679 bytes)
	I1008 10:56:27.590049    8523 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-810000 san=[127.0.0.1 localhost minikube stopped-upgrade-810000]
	I1008 10:56:27.659656    8523 provision.go:177] copyRemoteCerts
	I1008 10:56:27.659955    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1008 10:56:27.659964    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 10:56:27.687882    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1008 10:56:27.694638    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1008 10:56:27.701414    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1008 10:56:27.708869    8523 provision.go:87] duration metric: took 120.327333ms to configureAuth
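
configureAuth copies the host CA material into the machine store and generates a server certificate whose SANs are listed in the log (127.0.0.1, localhost, minikube, and the profile name). A self-contained sketch of producing a SAN-bearing certificate with crypto/x509; for brevity it self-signs, whereas the real flow signs with the minikube CA, and all names are illustrative:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-810000"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile config
    		DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-810000"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	// Self-signed for the sketch: the template doubles as the parent.
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
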
	I1008 10:56:27.708880    8523 buildroot.go:189] setting minikube options for container-runtime
	I1008 10:56:27.708988    8523 config.go:182] Loaded profile config "stopped-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 10:56:27.709046    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.709140    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.709146    8523 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1008 10:56:27.758595    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1008 10:56:27.758607    8523 buildroot.go:70] root file system type: tmpfs
	I1008 10:56:27.758684    8523 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1008 10:56:27.758750    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.758866    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.758901    8523 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1008 10:56:27.814871    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1008 10:56:27.814945    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:27.815060    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:27.815071    8523 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1008 10:56:28.192106    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1008 10:56:28.192120    8523 machine.go:96] duration metric: took 764.14225ms to provisionDockerMachine
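
The diff output above ("can't stat ... No such file or directory") is the expected first-boot path: docker.service.new is written unconditionally, and only moved into place when it differs from (or, as here, there is no) installed unit, followed by daemon-reload, enable, and restart. A sketch of the same compare-and-swap idiom driven from Go; paths match the log, the function name is hypothetical, and it needs sudo on a systemd host:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // updateUnit replaces the installed unit only when the staged copy differs,
    // so an unchanged configuration never triggers a docker restart.
    func updateUnit() error {
    	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || ` +
    		`{ sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
    		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
    	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
    	fmt.Print(string(out))
    	return err
    }

    func main() {
    	if err := updateUnit(); err != nil {
    		fmt.Println("update failed:", err)
    	}
    }
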
	I1008 10:56:28.192127    8523 start.go:293] postStartSetup for "stopped-upgrade-810000" (driver="qemu2")
	I1008 10:56:28.192133    8523 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1008 10:56:28.192209    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1008 10:56:28.192220    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 10:56:28.220095    8523 ssh_runner.go:195] Run: cat /etc/os-release
	I1008 10:56:28.221517    8523 info.go:137] Remote host: Buildroot 2021.02.12
	I1008 10:56:28.221528    8523 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/addons for local assets ...
	I1008 10:56:28.221609    8523 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19774-6384/.minikube/files for local assets ...
	I1008 10:56:28.221754    8523 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem -> 69072.pem in /etc/ssl/certs
	I1008 10:56:28.221904    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1008 10:56:28.225089    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:28.232589    8523 start.go:296] duration metric: took 40.457041ms for postStartSetup
	I1008 10:56:28.232604    8523 fix.go:56] duration metric: took 19.896968042s for fixHost
	I1008 10:56:28.232650    8523 main.go:141] libmachine: Using SSH client type: native
	I1008 10:56:28.232750    8523 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102fe2480] 0x102fe4cc0 <nil>  [] 0s} localhost 51195 <nil> <nil>}
	I1008 10:56:28.232755    8523 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1008 10:56:28.286815    8523 main.go:141] libmachine: SSH cmd err, output: <nil>: 1728410188.121122629
	
	I1008 10:56:28.286824    8523 fix.go:216] guest clock: 1728410188.121122629
	I1008 10:56:28.286829    8523 fix.go:229] Guest: 2024-10-08 10:56:28.121122629 -0700 PDT Remote: 2024-10-08 10:56:28.232607 -0700 PDT m=+20.099956751 (delta=-111.484371ms)
	I1008 10:56:28.286839    8523 fix.go:200] guest clock delta is within tolerance: -111.484371ms
	I1008 10:56:28.286844    8523 start.go:83] releasing machines lock for "stopped-upgrade-810000", held for 19.951219375s
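
The clock check runs `date +%s.%N` in the guest and compares it against the host clock; the -111ms delta above is inside tolerance, so no resync is needed. A sketch of the comparison using the sample value from the log (float parsing loses sub-microsecond precision, which is fine for a tolerance check; with a live guest reading, the delta would be small):

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    func main() {
    	guestOut := "1728410188.121122629" // sample `date +%s.%N` output from the log
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(time.Now())
    	const tolerance = time.Second
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta > -tolerance && delta < tolerance)
    }
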
	I1008 10:56:28.286937    8523 ssh_runner.go:195] Run: cat /version.json
	I1008 10:56:28.286945    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 10:56:28.286950    8523 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1008 10:56:28.287206    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	W1008 10:56:28.363024    8523 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1008 10:56:28.363086    8523 ssh_runner.go:195] Run: systemctl --version
	I1008 10:56:28.365078    8523 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1008 10:56:28.366929    8523 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1008 10:56:28.366977    8523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1008 10:56:28.370058    8523 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1008 10:56:28.375092    8523 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1008 10:56:28.375102    8523 start.go:495] detecting cgroup driver to use...
	I1008 10:56:28.375222    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:28.381699    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I1008 10:56:28.385161    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1008 10:56:28.388793    8523 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1008 10:56:28.388827    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1008 10:56:28.392359    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:28.396059    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1008 10:56:28.399095    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1008 10:56:28.402030    8523 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1008 10:56:28.405351    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1008 10:56:28.408734    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1008 10:56:28.412330    8523 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1008 10:56:28.415283    8523 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1008 10:56:28.417965    8523 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1008 10:56:28.421248    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:28.500121    8523 ssh_runner.go:195] Run: sudo systemctl restart containerd
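
The containerd block above is a series of idempotent sed edits to /etc/containerd/config.toml (force SystemdCgroup = false for the cgroupfs driver, migrate runtime v1 names to runc.v2, pin the CNI conf dir) followed by daemon-reload and restart. A sketch of driving such an edit list from Go; the edits are a subset copied from the log and would need root on a real host:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	edits := []string{
    		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
    		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
    		`sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml`,
    	}
    	for _, e := range edits {
    		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
    			fmt.Printf("edit failed: %v\n%s", err, out)
    			return
    		}
    	}
    	fmt.Println("containerd configured for cgroupfs; restart to apply")
    }
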
	I1008 10:56:28.505807    8523 start.go:495] detecting cgroup driver to use...
	I1008 10:56:28.505904    8523 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1008 10:56:28.513475    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:28.518838    8523 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1008 10:56:28.526613    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1008 10:56:28.531587    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 10:56:28.536390    8523 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1008 10:56:28.573411    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1008 10:56:28.578750    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1008 10:56:28.584141    8523 ssh_runner.go:195] Run: which cri-dockerd
	I1008 10:56:28.585377    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1008 10:56:28.588500    8523 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1008 10:56:28.593426    8523 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1008 10:56:28.684808    8523 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1008 10:56:28.766039    8523 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I1008 10:56:28.766103    8523 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1008 10:56:28.771916    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:28.857843    8523 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:29.990973    8523 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.133107417s)
	I1008 10:56:29.991109    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1008 10:56:29.996984    8523 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1008 10:56:30.004414    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:30.009664    8523 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1008 10:56:30.091409    8523 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1008 10:56:30.170005    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:30.248152    8523 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1008 10:56:30.254428    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1008 10:56:30.259685    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:30.340108    8523 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1008 10:56:30.380451    8523 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1008 10:56:30.380554    8523 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1008 10:56:30.382663    8523 start.go:563] Will wait 60s for crictl version
	I1008 10:56:30.382727    8523 ssh_runner.go:195] Run: which crictl
	I1008 10:56:30.384266    8523 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1008 10:56:30.399673    8523 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I1008 10:56:30.399753    8523 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:30.418880    8523 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1008 10:56:30.439828    8523 out.go:235] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I1008 10:56:30.439916    8523 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I1008 10:56:30.441200    8523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 10:56:30.445673    8523 kubeadm.go:883] updating cluster {Name:stopped-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51227 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I1008 10:56:30.445722    8523 preload.go:131] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I1008 10:56:30.445774    8523 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:30.457053    8523 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:30.457061    8523 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:30.457117    8523 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:30.460990    8523 ssh_runner.go:195] Run: which lz4
	I1008 10:56:30.462346    8523 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1008 10:56:30.463790    8523 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1008 10:56:30.463800    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I1008 10:56:31.412723    8523 docker.go:649] duration metric: took 950.4175ms to copy over tarball
	I1008 10:56:31.412800    8523 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1008 10:56:32.615930    8523 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.203114791s)
	I1008 10:56:32.615947    8523 ssh_runner.go:146] rm: /preloaded.tar.lz4
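
Because the k8s.gcr.io-tagged images baked into the old VM don't satisfy the registry.k8s.io names minikube now expects, the preload tarball is scp'd to /preloaded.tar.lz4 and unpacked directly over /var (which contains lib/docker), then deleted. A sketch of the extraction step as run above, assuming lz4 is installed and the tarball is already in place:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	start := time.Now()
    	// Same flags as the log: preserve security.capability xattrs and
    	// decompress with lz4 while extracting into /var.
    	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("extracted preload in %s\n", time.Since(start))
    }
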
	I1008 10:56:32.632263    8523 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1008 10:56:32.635817    8523 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I1008 10:56:32.641578    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:32.714233    8523 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1008 10:56:34.565123    8523 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.850877667s)
	I1008 10:56:34.565239    8523 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1008 10:56:34.587126    8523 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1008 10:56:34.587137    8523 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I1008 10:56:34.587157    8523 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1008 10:56:34.594483    8523 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:34.595048    8523 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:34.596938    8523 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:34.598676    8523 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:34.598904    8523 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:34.599247    8523 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:34.601399    8523 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:34.601508    8523 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1008 10:56:34.601444    8523 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:34.603357    8523 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:34.604448    8523 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:34.604467    8523 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1008 10:56:34.605608    8523 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:34.605627    8523 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:34.606792    8523 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:34.607970    8523 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.141027    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:35.149873    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:35.155703    8523 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I1008 10:56:35.156150    8523 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:35.156215    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I1008 10:56:35.165881    8523 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I1008 10:56:35.165923    8523 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:35.165995    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I1008 10:56:35.179936    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I1008 10:56:35.180794    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:35.184316    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I1008 10:56:35.185481    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I1008 10:56:35.185512    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	W1008 10:56:35.215035    8523 image.go:283] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:35.215240    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:35.231823    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:35.253935    8523 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I1008 10:56:35.253999    8523 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:35.254064    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1008 10:56:35.282358    8523 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I1008 10:56:35.282387    8523 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:35.282452    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I1008 10:56:35.291809    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I1008 10:56:35.291973    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:35.298821    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1008 10:56:35.333080    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I1008 10:56:35.333189    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I1008 10:56:35.333233    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I1008 10:56:35.359450    8523 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I1008 10:56:35.359487    8523 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I1008 10:56:35.359556    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I1008 10:56:35.410739    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I1008 10:56:35.410898    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1008 10:56:35.434910    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I1008 10:56:35.434939    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I1008 10:56:35.439577    8523 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1008 10:56:35.439591    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I1008 10:56:35.493037    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:35.545733    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1008 10:56:35.545756    8523 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I1008 10:56:35.545761    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I1008 10:56:35.545769    8523 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I1008 10:56:35.545788    8523 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:35.545847    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I1008 10:56:35.564161    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I1008 10:56:35.567875    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.578557    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I1008 10:56:35.578579    8523 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1008 10:56:35.578595    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I1008 10:56:35.579445    8523 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I1008 10:56:35.579464    8523 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.579527    8523 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I1008 10:56:35.720061    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1008 10:56:35.720098    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	W1008 10:56:36.020898    8523 image.go:283] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1008 10:56:36.021019    8523 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.033649    8523 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1008 10:56:36.033673    8523 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.033749    8523 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 10:56:36.050069    8523 cache_images.go:289] Loading image from: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1008 10:56:36.050221    8523 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:36.051707    8523 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1008 10:56:36.051719    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1008 10:56:36.084119    8523 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1008 10:56:36.084132    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1008 10:56:36.322868    8523 cache_images.go:321] Transferred and loaded /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1008 10:56:36.322914    8523 cache_images.go:92] duration metric: took 1.735751583s to LoadCachedImages
	W1008 10:56:36.323129    8523 out.go:270] X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
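
Each "needs transfer" entry above follows the same loop: inspect the image in the runtime, remove the wrong copy, scp the cached tarball into /var/lib/minikube/images, and pipe it through docker load. The kube-apiserver, controller-manager, scheduler, and proxy images fail only because their tarballs are missing from the host cache, which produces the warning above. A hedged sketch of that loop; names and paths are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // ensureImage loads a cached image tarball when the runtime doesn't already
    // have the image (the real code also compares content hashes).
    func ensureImage(image, cachedTar string) error {
    	out, err := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", image).Output()
    	if err == nil && strings.TrimSpace(string(out)) != "" {
    		return nil // already present
    	}
    	load := exec.Command("/bin/bash", "-c", fmt.Sprintf("sudo cat %s | docker load", cachedTar))
    	if loadOut, err := load.CombinedOutput(); err != nil {
    		return fmt.Errorf("docker load: %v\n%s", err, loadOut)
    	}
    	return nil
    }

    func main() {
    	if err := ensureImage("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"); err != nil {
    		fmt.Println(err)
    	}
    }
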
	I1008 10:56:36.323255    8523 kubeadm.go:934] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I1008 10:56:36.323320    8523 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-810000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1008 10:56:36.323390    8523 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1008 10:56:36.341628    8523 cni.go:84] Creating CNI manager for ""
	I1008 10:56:36.341644    8523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:56:36.341651    8523 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1008 10:56:36.341667    8523 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-810000 NodeName:stopped-upgrade-810000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1008 10:56:36.341733    8523 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-810000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
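One property worth checking in the generated config above: the pod subnet (10.244.0.0/16), service subnet (10.96.0.0/12), and advertise address (10.0.2.15) must not collide, or kube-proxy and CNI routing fail in confusing ways. A quick stdlib sanity check over those values:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
    	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
    	adv := net.ParseIP("10.0.2.15")
    	// Two CIDRs overlap iff one contains the other's network address.
    	fmt.Println("pod/service CIDRs overlap:", pods.Contains(svcs.IP) || svcs.Contains(pods.IP))
    	fmt.Println("advertise addr inside pod CIDR:", pods.Contains(adv))
    	fmt.Println("advertise addr inside service CIDR:", svcs.Contains(adv))
    }
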
	I1008 10:56:36.341796    8523 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I1008 10:56:36.344993    8523 binaries.go:44] Found k8s binaries, skipping transfer
	I1008 10:56:36.345032    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1008 10:56:36.347873    8523 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1008 10:56:36.352785    8523 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1008 10:56:36.358358    8523 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I1008 10:56:36.364395    8523 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I1008 10:56:36.365845    8523 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1008 10:56:36.370051    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 10:56:36.459508    8523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 10:56:36.467459    8523 certs.go:68] Setting up /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000 for IP: 10.0.2.15
	I1008 10:56:36.467472    8523 certs.go:194] generating shared ca certs ...
	I1008 10:56:36.467482    8523 certs.go:226] acquiring lock for ca certs: {Name:mkb70c9691d78e2ecd0076f3f0607577e8eefb22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.467750    8523 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key
	I1008 10:56:36.467792    8523 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key
	I1008 10:56:36.467960    8523 certs.go:256] generating profile certs ...
	I1008 10:56:36.468089    8523 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.key
	I1008 10:56:36.468105    8523 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39
	I1008 10:56:36.468265    8523 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I1008 10:56:36.525454    8523 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39 ...
	I1008 10:56:36.525479    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39: {Name:mk811a22ffd011f3d85e0fb59b6e1f5c93ef2a14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.525769    8523 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39 ...
	I1008 10:56:36.525774    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39: {Name:mk1ff52479bc6a11b5837c24228770ede08bc28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.525975    8523 certs.go:381] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt.93a04a39 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt
	I1008 10:56:36.526113    8523 certs.go:385] copying /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key.93a04a39 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key
	I1008 10:56:36.526286    8523 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/proxy-client.key
	I1008 10:56:36.526423    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem (1338 bytes)
	W1008 10:56:36.526447    8523 certs.go:480] ignoring /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907_empty.pem, impossibly tiny 0 bytes
	I1008 10:56:36.526453    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca-key.pem (1679 bytes)
	I1008 10:56:36.526474    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem (1078 bytes)
	I1008 10:56:36.526492    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem (1123 bytes)
	I1008 10:56:36.526509    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/key.pem (1679 bytes)
	I1008 10:56:36.526560    8523 certs.go:484] found cert: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem (1708 bytes)
	I1008 10:56:36.527897    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1008 10:56:36.535729    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1008 10:56:36.550696    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1008 10:56:36.558383    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1008 10:56:36.565482    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1008 10:56:36.576011    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1008 10:56:36.584505    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1008 10:56:36.592625    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1008 10:56:36.600281    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/6907.pem --> /usr/share/ca-certificates/6907.pem (1338 bytes)
	I1008 10:56:36.607236    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/ssl/certs/69072.pem --> /usr/share/ca-certificates/69072.pem (1708 bytes)
	I1008 10:56:36.614361    8523 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1008 10:56:36.622594    8523 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1008 10:56:36.630522    8523 ssh_runner.go:195] Run: openssl version
	I1008 10:56:36.632497    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6907.pem && ln -fs /usr/share/ca-certificates/6907.pem /etc/ssl/certs/6907.pem"
	I1008 10:56:36.635624    8523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6907.pem
	I1008 10:56:36.637097    8523 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  8 17:43 /usr/share/ca-certificates/6907.pem
	I1008 10:56:36.637136    8523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6907.pem
	I1008 10:56:36.638810    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/6907.pem /etc/ssl/certs/51391683.0"
	I1008 10:56:36.641900    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/69072.pem && ln -fs /usr/share/ca-certificates/69072.pem /etc/ssl/certs/69072.pem"
	I1008 10:56:36.645431    8523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/69072.pem
	I1008 10:56:36.646842    8523 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  8 17:43 /usr/share/ca-certificates/69072.pem
	I1008 10:56:36.646876    8523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/69072.pem
	I1008 10:56:36.648496    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/69072.pem /etc/ssl/certs/3ec20f2e.0"
	I1008 10:56:36.651786    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1008 10:56:36.655071    8523 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:36.656622    8523 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  8 17:55 /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:36.656664    8523 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1008 10:56:36.659102    8523 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
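
The 51391683.0, 3ec20f2e.0, and b5213941.0 link names above are OpenSSL subject-hash names: `openssl x509 -hash -noout` prints the hash that OpenSSL's certificate-directory lookup expects as the symlink basename, with a .0 suffix to disambiguate collisions. A sketch that reproduces the link name for a given PEM, assuming openssl is on PATH:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func subjectHashLink(pemPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)) + ".0", nil
    }

    func main() {
    	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("would link /etc/ssl/certs/" + link)
    }
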
	I1008 10:56:36.662411    8523 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1008 10:56:36.664026    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1008 10:56:36.666141    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1008 10:56:36.668409    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1008 10:56:36.670496    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1008 10:56:36.672467    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1008 10:56:36.674190    8523 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
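
Each "-checkend 86400" above asks openssl to exit non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides the existing control-plane certs are still usable. The same test can be done natively with crypto/x509; a sketch, assuming single-certificate PEM files as written by kubeadm:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM cert at path expires inside window,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expires within the window iff now+window passes NotAfter.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
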
	I1008 10:56:36.676045    8523 kubeadm.go:392] StartCluster: {Name:stopped-upgrade-810000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:51227 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-810000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I1008 10:56:36.676108    8523 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:36.686216    8523 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1008 10:56:36.690066    8523 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1008 10:56:36.690075    8523 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1008 10:56:36.690115    8523 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1008 10:56:36.693719    8523 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1008 10:56:36.693947    8523 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-810000" does not appear in /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:56:36.693967    8523 kubeconfig.go:62] /Users/jenkins/minikube-integration/19774-6384/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-810000" cluster setting kubeconfig missing "stopped-upgrade-810000" context setting]
	I1008 10:56:36.694142    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:56:36.695379    8523 kapi.go:59] client config for stopped-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a380f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 10:56:36.701213    8523 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1008 10:56:36.705083    8523 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-810000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
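
The drift check above is nothing more than "diff -u" between the kubeadm.yaml already on the node and the freshly rendered one: exit status 1 means the files differ and the cluster must be reconfigured (here criSocket gained its unix:// scheme and cgroupDriver moved from systemd to cgroupfs). A sketch of that exit-code convention in Go, assuming POSIX diff semantics (the function name is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

// configDrift runs `diff -u old new` and interprets the exit code the way
// the restart path above does: 0 = identical, 1 = drift (reconfigure),
// anything else = a real error such as a missing file.
func configDrift(oldPath, newPath string) (bool, string, error) {
	out, err := exec.Command("diff", "-u", oldPath, newPath).CombinedOutput()
	if err == nil {
		return false, "", nil // files identical, nothing to do
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, string(out), nil // differences found
	}
	return false, "", err // diff itself failed
}

func main() {
	drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	if drift {
		fmt.Println("detected kubeadm config drift:\n" + diff)
	}
}
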
	I1008 10:56:36.705091    8523 kubeadm.go:1160] stopping kube-system containers ...
	I1008 10:56:36.705139    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1008 10:56:36.716101    8523 docker.go:483] Stopping containers: [56f80cdf5031 5f436e794069 838f048371b1 723b63a1a7b2 b60901c3d729 e61fade57ee9 b706c818d35c fbfef5a53508]
	I1008 10:56:36.716187    8523 ssh_runner.go:195] Run: docker stop 56f80cdf5031 5f436e794069 838f048371b1 723b63a1a7b2 b60901c3d729 e61fade57ee9 b706c818d35c fbfef5a53508
	I1008 10:56:36.726640    8523 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1008 10:56:36.732196    8523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 10:56:36.735623    8523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 10:56:36.735634    8523 kubeadm.go:157] found existing configuration files:
	
	I1008 10:56:36.735704    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf
	I1008 10:56:36.738560    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 10:56:36.738596    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 10:56:36.741638    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf
	I1008 10:56:36.744941    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 10:56:36.744983    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 10:56:36.748365    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf
	I1008 10:56:36.751114    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 10:56:36.751154    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 10:56:36.753907    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf
	I1008 10:56:36.756956    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 10:56:36.756992    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1008 10:56:36.759927    8523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 10:56:36.762709    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:36.787917    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:37.379812    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:37.527301    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1008 10:56:37.557569    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
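
Note that the restart path never runs a full "kubeadm init"; it replays only the five phases above (certs, kubeconfig, kubelet-start, control-plane, etcd) against the same /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence, with the phase arguments taken verbatim from the log (the helper is illustrative and omits the PATH/env plumbing shown above):

package main

import (
	"fmt"
	"os/exec"
)

// replayInitPhases runs the subset of `kubeadm init` phases used by the
// restart above, in order, stopping at the first failure.
func replayInitPhases(config string) error {
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, p := range phases {
		args := append([]string{"init", "phase"}, p...)
		args = append(args, "--config", config)
		if out, err := exec.Command("kubeadm", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("kubeadm %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := replayInitPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
		fmt.Println(err)
	}
}
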
	I1008 10:56:37.588894    8523 api_server.go:52] waiting for apiserver process to appear ...
	I1008 10:56:37.588978    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:38.091042    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:38.591031    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 10:56:38.596317    8523 api_server.go:72] duration metric: took 1.007425209s to wait for apiserver process to appear ...
	I1008 10:56:38.596330    8523 api_server.go:88] waiting for apiserver healthz status ...
	I1008 10:56:38.596340    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:43.599173    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:43.599207    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:48.600084    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:48.600171    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:53.601396    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:53.601498    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:56:58.602982    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:56:58.603036    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:03.604727    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:03.604816    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:08.606917    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:08.607021    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:13.609789    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:13.609868    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:18.611161    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:18.611188    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:23.613342    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:23.613381    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:28.615578    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:28.615606    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:33.617853    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:33.617899    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:38.620164    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
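
From here the run settles into a diagnostic loop: each healthz probe above is given a client timeout of roughly five seconds, and after a failed probe minikube collects component logs before trying again. A sketch of that probe pattern (assumptions for this sketch: the apiserver's serving cert is not in the host trust store, hence InsecureSkipVerify, and the retry interval is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint the way the loop above
// does: each probe gets its own short client timeout, failures are logged,
// and the probe is retried until the overall deadline passes.
func waitHealthz(url string, probeTimeout, overall time.Duration) bool {
	client := &http.Client{
		Timeout:   probeTimeout,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				return true
			}
			err = fmt.Errorf("healthz returned %s", resp.Status)
		}
		fmt.Printf("stopped: %s: %v\n", url, err)
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	ok := waitHealthz("https://10.0.2.15:8443/healthz", 5*time.Second, 4*time.Minute)
	fmt.Println("apiserver healthy:", ok)
}
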
	I1008 10:57:38.620717    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:38.644189    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:57:38.644311    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:38.660240    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:57:38.660336    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:38.673567    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:57:38.673668    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:38.684578    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:57:38.684660    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:38.694926    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:57:38.695001    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:38.705482    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:57:38.705547    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:38.715946    8523 logs.go:282] 0 containers: []
	W1008 10:57:38.715961    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:38.716039    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:38.726318    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:57:38.726335    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:57:38.726345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:57:38.740246    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:57:38.740257    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:57:38.756050    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:57:38.756060    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:57:38.776799    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:57:38.776811    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:57:38.793384    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:57:38.793393    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:57:38.810695    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:38.810705    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:38.918825    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:57:38.918837    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:57:38.932069    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:57:38.932080    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:57:38.943487    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:38.943503    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:38.970006    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:57:38.970017    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:38.981869    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:38.981879    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:57:39.011351    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:39.011359    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:39.015279    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:57:39.015285    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:57:39.032997    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:57:39.033008    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:57:39.051255    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:57:39.051266    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:57:39.069313    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:57:39.069324    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
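
Each gathering round above repeats the same fan-out: one "docker ps -a --filter=name=k8s_<component>" per component to enumerate container IDs (running and exited alike), then "docker logs --tail 400" on every ID found. A condensed sketch of that fan-out (component list shortened; helper names are illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists container IDs for one kube-system component, matching
// the `docker ps -a --filter=name=k8s_<component>` calls above.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

// tailLogs mirrors `docker logs --tail 400 <id>` for each found container.
func tailLogs(component string) error {
	ids, err := containersFor(component)
	if err != nil {
		return err
	}
	for _, id := range ids {
		out, err := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("== %s [%s] ==\n%s\n", component, id, out)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler"} {
		if err := tailLogs(c); err != nil {
			fmt.Println("gather", c, ":", err)
		}
	}
}
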
	I1008 10:57:41.582691    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:46.585474    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:46.585705    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:46.598689    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:57:46.598784    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:46.609209    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:57:46.609288    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:46.619339    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:57:46.619415    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:46.632928    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:57:46.633007    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:46.643373    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:57:46.643453    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:46.653511    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:57:46.653599    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:46.663360    8523 logs.go:282] 0 containers: []
	W1008 10:57:46.663371    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:46.663431    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:46.674115    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:57:46.674143    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:46.674153    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:57:46.703571    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:57:46.703582    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:57:46.722612    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:57:46.722625    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:57:46.738325    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:57:46.738336    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:57:46.763886    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:57:46.763901    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:57:46.774988    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:57:46.775000    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:57:46.788981    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:57:46.788995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:57:46.800029    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:57:46.800039    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:57:46.814459    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:57:46.814469    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:57:46.832890    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:57:46.832901    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:57:46.852580    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:46.852590    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:46.856692    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:46.856698    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:46.894458    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:57:46.894470    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:57:46.908855    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:57:46.908867    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:46.920937    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:57:46.920948    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:57:46.940538    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:46.940552    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:49.466879    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:57:54.469449    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:57:54.469661    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:57:54.485085    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:57:54.485177    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:57:54.496866    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:57:54.496946    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:57:54.507658    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:57:54.507744    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:57:54.518756    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:57:54.518839    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:57:54.529256    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:57:54.529336    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:57:54.539890    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:57:54.539964    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:57:54.550119    8523 logs.go:282] 0 containers: []
	W1008 10:57:54.550130    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:57:54.550198    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:57:54.560593    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:57:54.560617    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:57:54.560621    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:57:54.572075    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:57:54.572086    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:57:54.598106    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:57:54.598118    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:57:54.612391    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:57:54.612406    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:57:54.626867    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:57:54.626882    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:57:54.639125    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:57:54.639136    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:57:54.656417    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:57:54.656427    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:57:54.677909    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:57:54.677920    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:57:54.707290    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:57:54.707300    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:57:54.711466    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:57:54.711473    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:57:54.724745    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:57:54.724761    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:57:54.735829    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:57:54.735841    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:57:54.747412    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:57:54.747426    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:57:54.783642    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:57:54.783656    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:57:54.797423    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:57:54.797432    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:57:54.811455    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:57:54.811464    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:57:57.331630    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:02.334130    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:02.334407    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:02.360316    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:02.360457    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:02.377834    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:02.377948    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:02.391376    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:02.391462    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:02.402912    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:02.402990    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:02.415081    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:02.415157    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:02.425065    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:02.425132    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:02.436142    8523 logs.go:282] 0 containers: []
	W1008 10:58:02.436157    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:02.436225    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:02.446086    8523 logs.go:282] 1 containers: [5333aa2337bc]
	I1008 10:58:02.446113    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:02.446120    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:02.460940    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:02.460951    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:02.474870    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:02.474879    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:02.493839    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:02.493851    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:02.514930    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:02.514941    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:02.526921    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:02.526933    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:02.537884    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:02.537896    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:02.564215    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:02.564223    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:02.579415    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:02.579428    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:02.593871    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:02.593883    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:02.622717    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:02.622728    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:02.640674    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:02.640690    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:02.656018    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:02.656033    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:02.673936    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:02.673948    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:02.686200    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:02.686212    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:02.690532    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:02.690539    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:05.237869    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:10.240509    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:10.240766    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:10.260413    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:10.260519    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:10.274929    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:10.275027    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:10.287079    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:10.287162    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:10.297724    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:10.297804    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:10.308882    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:10.308961    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:10.319517    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:10.319594    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:10.329862    8523 logs.go:282] 0 containers: []
	W1008 10:58:10.329875    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:10.329942    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:10.340695    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:10.340714    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:10.340720    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:10.377618    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:10.377631    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:10.390536    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:10.390546    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:10.405792    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:10.405804    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:10.423068    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:10.423078    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:10.427266    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:10.427273    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:10.441429    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:10.441442    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:10.452753    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:10.452764    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:10.464624    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:10.464638    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:10.476406    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:10.476416    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:10.497515    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:10.497528    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:10.510008    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:10.510022    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:10.528115    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:10.528125    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:10.557120    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:10.557130    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:10.570896    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:10.570908    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:10.590823    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:10.590834    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:10.602369    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:10.602380    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:13.128238    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:18.130037    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:18.130269    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:18.152549    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:18.152644    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:18.165682    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:18.165767    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:18.183019    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:18.183087    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:18.193742    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:18.193825    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:18.204308    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:18.204393    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:18.214448    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:18.214525    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:18.225107    8523 logs.go:282] 0 containers: []
	W1008 10:58:18.225118    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:18.225183    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:18.236243    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:18.236264    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:18.236270    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:18.257654    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:18.257666    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:18.274532    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:18.274545    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:18.288623    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:18.288634    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:18.318185    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:18.318197    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:18.335622    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:18.335634    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:18.346691    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:18.346705    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:18.380895    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:18.380907    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:18.394101    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:18.394113    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:18.406651    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:18.406662    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:18.425514    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:18.425528    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:18.436700    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:18.436711    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:18.462673    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:18.462680    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:18.476682    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:18.476692    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:18.491315    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:18.491326    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:18.518823    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:18.518833    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:18.522915    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:18.522921    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:21.041365    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:26.043776    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:26.043960    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:26.056453    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:26.056542    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:26.067421    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:26.067496    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:26.077749    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:26.077827    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:26.090856    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:26.090935    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:26.101111    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:26.101185    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:26.112171    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:26.112255    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:26.122630    8523 logs.go:282] 0 containers: []
	W1008 10:58:26.122641    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:26.122705    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:26.132878    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:26.132896    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:26.132901    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:26.154392    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:26.154405    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:26.165538    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:26.165548    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:26.195181    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:26.195196    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:26.199767    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:26.199774    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:26.212268    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:26.212284    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:26.225160    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:26.225172    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:26.246653    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:26.246665    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:26.261136    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:26.261147    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:26.272780    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:26.272792    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:26.298293    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:26.298301    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:26.333097    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:26.333112    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:26.346448    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:26.346460    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:26.361382    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:26.361393    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:26.382539    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:26.382549    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:26.400831    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:26.400842    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:26.415556    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:26.415571    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:28.935597    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:33.937895    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:33.938507    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:33.980291    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:33.980441    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:34.000290    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:34.000401    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:34.014447    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:34.014537    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:34.026344    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:34.026422    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:34.043421    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:34.043529    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:34.054024    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:34.054101    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:34.064446    8523 logs.go:282] 0 containers: []
	W1008 10:58:34.064457    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:34.064522    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:34.074930    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:34.074946    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:34.074953    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:34.079420    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:34.079428    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:34.096495    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:34.096509    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:34.120545    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:34.120564    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:34.135622    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:34.135637    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:34.153288    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:34.153302    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:34.174311    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:34.174326    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:34.186532    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:34.186543    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:34.199305    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:34.199315    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:34.213287    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:34.213298    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:34.225280    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:34.225292    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:34.236487    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:34.236498    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:34.266046    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:34.266055    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:34.302780    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:34.302793    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:34.317316    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:34.317331    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:34.328971    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:34.328985    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:34.350338    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:34.350348    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
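
The block above is one complete diagnostic pass: with the apiserver health check failing, minikube enumerates the control-plane containers one component at a time via `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`, then tails each hit's logs. Below is a minimal standalone sketch of the enumeration step; `listK8sContainers` is a hypothetical helper name for illustration, not minikube's own code.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listK8sContainers mirrors the "docker ps -a --filter=name=k8s_<name>
// --format={{.ID}}" calls seen in the log: it returns all container IDs
// (running or exited) for one Kubernetes component.
func listK8sContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; Fields also drops the trailing newline.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listK8sContainers(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		// Matches the log's "N containers: [...]" reporting style.
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```

Two IDs per component (e.g. `[4940e0f91298 1d27ee4283f5]` for kube-apiserver) indicate a restarted container alongside its exited predecessor, which is why the gathering step below pulls logs from both.
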
	I1008 10:58:36.864200    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:41.866977    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:41.867275    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:41.894791    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:41.894930    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:41.912540    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:41.912642    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:41.925650    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:41.925732    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:41.939133    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:41.939214    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:41.949576    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:41.949657    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:41.961107    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:41.961189    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:41.971263    8523 logs.go:282] 0 containers: []
	W1008 10:58:41.971275    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:41.971341    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:41.981783    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:41.981801    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:41.981807    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:41.992785    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:41.992799    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:42.007707    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:42.007720    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:42.034166    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:42.034177    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:42.045880    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:42.045894    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:42.075021    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:42.075028    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:42.096494    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:42.096506    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:42.113881    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:42.113893    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:42.131149    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:42.131165    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:42.135182    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:42.135190    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:42.147703    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:42.147713    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:42.161752    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:42.161767    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:42.174068    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:42.174084    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:42.207698    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:42.207708    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:42.221979    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:42.221995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:42.236869    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:42.236880    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:42.249677    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:42.249694    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
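
Each "Gathering logs for ..." pair in these cycles shells out through `/bin/bash -c`, capping container logs at 400 lines (`docker logs --tail 400 <id>`) and pulling unit logs from journald (`journalctl -u kubelet -n 400`). A sketch of that pattern follows, under the assumption that plain `os/exec` on the target host is close enough to the report's ssh_runner calls, which actually execute over SSH inside the guest VM:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather runs one shell command and returns its combined output,
// mirroring the `/bin/bash -c "..."` Run lines in the log.
// Illustrative sketch only, not minikube's logs.go.
func gather(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	return string(out), err
}

func main() {
	// The same sources the log cycles through: per-container docker
	// logs capped at 400 lines, plus the kubelet journal.
	sources := []string{
		"docker logs --tail 400 4940e0f91298", // container ID taken from the log above
		"sudo journalctl -u kubelet -n 400",
	}
	for _, s := range sources {
		if out, err := gather(s); err != nil {
			fmt.Println(s, "failed:", err)
		} else {
			fmt.Print(out)
		}
	}
}
```
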
	I1008 10:58:44.763711    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:49.764521    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:49.764764    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:49.790207    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:49.790359    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:49.806575    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:49.806673    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:49.820354    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:49.820440    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:49.832511    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:49.832589    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:49.842997    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:49.843070    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:49.853611    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:49.853685    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:49.864111    8523 logs.go:282] 0 containers: []
	W1008 10:58:49.864127    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:49.864192    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:49.875186    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:49.875204    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:49.875210    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:49.889554    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:49.889566    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:49.903935    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:49.903947    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:49.915516    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:49.915527    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:49.920565    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:49.920572    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:49.957551    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:49.957562    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:49.972167    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:49.972178    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:49.983810    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:49.983820    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:50.009811    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:50.009829    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:50.041107    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:50.041130    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:50.054054    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:50.054069    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:50.070723    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:50.070738    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:50.082655    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:50.082666    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:50.095525    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:50.095540    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:50.117232    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:50.117243    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:50.131382    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:50.131397    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:58:50.149218    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:50.149228    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
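
The lines bracketing every cycle show the probe itself: a GET against https://10.0.2.15:8443/healthz that fails after exactly five seconds with "(Client.Timeout exceeded while awaiting headers)", which is the error net/http produces when an `http.Client`'s overall `Timeout` fires before response headers arrive. A minimal reproduction of that polling behavior (illustrative, not minikube's api_server.go; the endpoint and intervals are taken from the log):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// A 5s client timeout reproduces the "+5s, Client.Timeout
		// exceeded while awaiting headers" failures in the log.
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The guest apiserver's cert is not trusted by the host,
			// so a raw probe like this must skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://10.0.2.15:8443/healthz"
	for i := 0; i < 3; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // analogous to the log's failure line
			// The log shows ~2.5s between the end of one cycle's
			// gathering and the next healthz attempt.
			time.Sleep(2500 * time.Millisecond)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}
```

In the report the probe never succeeds, so minikube alternates between this timeout and a fresh log-gathering pass for the remainder of the run.
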
	I1008 10:58:52.668904    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:58:57.670595    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:58:57.670753    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:58:57.687157    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:58:57.687249    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:58:57.700294    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:58:57.700382    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:58:57.711315    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:58:57.711386    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:58:57.722194    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:58:57.722278    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:58:57.733011    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:58:57.733093    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:58:57.744144    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:58:57.744220    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:58:57.754562    8523 logs.go:282] 0 containers: []
	W1008 10:58:57.754574    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:58:57.754646    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:58:57.767164    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:58:57.767183    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:58:57.767188    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:58:57.781198    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:58:57.781210    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:58:57.802324    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:58:57.802334    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:58:57.816883    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:58:57.816894    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:58:57.832211    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:58:57.832221    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:58:57.849545    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:58:57.849555    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:58:57.860541    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:58:57.860553    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:58:57.890078    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:58:57.890088    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:58:57.926842    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:58:57.926853    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:58:57.943972    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:58:57.943984    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:58:57.958949    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:58:57.958959    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:58:57.974124    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:58:57.974138    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:58:57.985534    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:58:57.985545    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:58:58.010993    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:58:58.011002    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:58:58.014922    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:58:58.014932    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:58:58.025947    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:58:58.025959    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:58:58.038330    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:58:58.038341    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
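
The "container status" step in each cycle uses a shell fallback chain, ``sudo `which crictl || echo crictl` ps -a || sudo docker ps -a``, so the listing works whether or not crictl is installed. The same try-in-order idea expressed directly in Go rather than in shell (a hypothetical helper, for illustration only):

```go
package main

import (
	"fmt"
	"os/exec"
)

// runFirst tries each command in order and returns the output of the
// first one that succeeds -- the crictl-then-docker fallback that the
// log's "container status" step encodes as a single bash -c one-liner.
func runFirst(cmds [][]string) (string, error) {
	var lastErr error
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		lastErr = err
	}
	return "", lastErr
}

func main() {
	out, err := runFirst([][]string{
		{"sudo", "crictl", "ps", "-a"},
		{"sudo", "docker", "ps", "-a"},
	})
	if err != nil {
		fmt.Println("no container runtime answered:", err)
		return
	}
	fmt.Print(out)
}
```
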
	I1008 10:59:00.558097    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:05.560783    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:05.561107    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:05.589389    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:05.589531    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:05.605186    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:05.605279    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:05.618104    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:05.618176    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:05.628971    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:05.629050    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:05.639382    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:05.639464    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:05.649961    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:05.650035    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:05.663061    8523 logs.go:282] 0 containers: []
	W1008 10:59:05.663074    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:05.663149    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:05.673467    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:05.673484    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:05.673489    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:05.687442    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:05.687453    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:05.704569    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:05.704580    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:05.732930    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:05.732943    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:05.744104    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:05.744116    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:05.765025    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:05.765035    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:05.780130    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:05.780141    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:05.794715    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:05.794726    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:05.805981    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:05.805997    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:05.823041    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:05.823050    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:05.834934    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:05.834945    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:05.858630    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:05.858637    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:05.870638    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:05.870649    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:05.885045    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:05.885057    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:05.896815    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:05.896825    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:05.901581    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:05.901588    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:05.937473    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:05.937485    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:08.452611    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:13.455302    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:13.455577    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:13.480995    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:13.481106    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:13.495408    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:13.495498    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:13.508148    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:13.508234    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:13.518962    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:13.519046    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:13.529499    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:13.529581    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:13.539702    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:13.539780    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:13.550156    8523 logs.go:282] 0 containers: []
	W1008 10:59:13.550169    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:13.550240    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:13.560673    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:13.560692    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:13.560700    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:13.599396    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:13.599408    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:13.613342    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:13.613355    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:13.625354    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:13.625365    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:13.648565    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:13.648578    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:13.662208    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:13.662220    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:13.677170    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:13.677184    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:13.691999    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:13.692010    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:13.703388    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:13.703400    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:13.731109    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:13.731117    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:13.742392    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:13.742403    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:13.760173    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:13.760183    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:13.784528    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:13.784538    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:13.797934    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:13.797949    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:13.802058    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:13.802065    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:13.819135    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:13.819144    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:13.838020    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:13.838032    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:16.353320    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:21.355581    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:21.355720    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:21.368053    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:21.368144    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:21.382824    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:21.382916    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:21.394000    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:21.394083    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:21.404550    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:21.404634    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:21.417715    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:21.417793    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:21.428504    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:21.428589    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:21.442852    8523 logs.go:282] 0 containers: []
	W1008 10:59:21.442863    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:21.442937    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:21.453263    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:21.453280    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:21.453285    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:21.467102    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:21.467112    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:21.484805    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:21.484817    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:21.496583    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:21.496594    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:21.509421    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:21.509432    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:21.545620    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:21.545636    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:21.560609    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:21.560624    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:21.577939    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:21.577950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:21.589336    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:21.589348    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:21.604234    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:21.604243    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:21.615385    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:21.615396    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:21.629177    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:21.629192    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:21.648248    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:21.648258    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:21.669391    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:21.669399    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:21.693900    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:21.693916    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:21.719233    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:21.719240    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:21.748344    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:21.748354    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:24.254638    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:29.257163    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:29.257696    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:29.292271    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:29.292420    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:29.312147    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:29.312261    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:29.327067    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:29.327155    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:29.341329    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:29.341411    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:29.354953    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:29.355032    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:29.365693    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:29.365762    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:29.376634    8523 logs.go:282] 0 containers: []
	W1008 10:59:29.376645    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:29.376705    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:29.388106    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:29.388126    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:29.388131    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:29.411230    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:29.411242    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:29.427611    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:29.427622    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:29.445007    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:29.445017    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:29.463463    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:29.463479    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:29.475594    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:29.475604    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:29.491595    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:29.491606    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:29.495895    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:29.495903    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:29.531259    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:29.531276    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:29.559669    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:29.559677    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:29.578810    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:29.578821    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:29.604305    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:29.604312    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:29.616834    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:29.616845    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:29.635290    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:29.635301    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:29.646839    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:29.646850    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:29.658297    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:29.658306    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:29.669819    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:29.669831    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:32.185817    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:37.188267    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:37.188721    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:37.223389    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:37.223545    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:37.243769    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:37.243870    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:37.258561    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:37.258649    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:37.270582    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:37.270698    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:37.281179    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:37.281257    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:37.292054    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:37.292132    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:37.302189    8523 logs.go:282] 0 containers: []
	W1008 10:59:37.302203    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:37.302271    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:37.312857    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:37.312875    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:37.312880    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:37.327606    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:37.327618    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:37.345549    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:37.345560    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:37.370345    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:37.370353    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:37.388096    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:37.388107    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:37.404518    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:37.404530    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:37.428940    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:37.428950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:37.446224    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:37.446237    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:37.458713    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:37.458726    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:37.475604    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:37.475621    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:37.487669    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:37.487685    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:37.516759    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:37.516770    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:37.521492    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:37.521498    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:37.555279    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:37.555291    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:37.575444    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:37.575457    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:37.595429    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:37.595440    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:37.607041    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:37.607057    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:40.121259    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:45.123696    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:45.123905    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:45.137230    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:45.137325    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:45.148704    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:45.148789    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:45.160161    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:45.160238    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:45.172151    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:45.172233    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:45.183975    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:45.184057    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:45.195561    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:45.195647    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:45.206625    8523 logs.go:282] 0 containers: []
	W1008 10:59:45.206638    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:45.206712    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:45.218487    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:45.218507    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:45.218513    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:45.232101    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:45.232112    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:45.249639    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:45.249650    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:45.286805    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:45.286819    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:45.301866    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:45.301881    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:45.317625    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:45.317644    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:45.340326    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:45.340337    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:45.361622    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:45.361631    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:45.392558    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:45.392571    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:45.397706    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:45.397717    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:45.423607    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:45.423617    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:45.437715    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:45.437729    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:45.454163    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:45.454177    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:45.474099    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:45.474111    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:45.493045    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:45.493059    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:45.506568    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:45.506581    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:45.526337    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:45.526350    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:48.040046    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 10:59:53.042357    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 10:59:53.042550    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 10:59:53.057688    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 10:59:53.057791    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 10:59:53.068732    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 10:59:53.068812    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 10:59:53.079414    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 10:59:53.079497    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 10:59:53.089791    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 10:59:53.089883    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 10:59:53.099929    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 10:59:53.100012    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 10:59:53.110239    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 10:59:53.110315    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 10:59:53.120731    8523 logs.go:282] 0 containers: []
	W1008 10:59:53.120748    8523 logs.go:284] No container was found matching "kindnet"
	I1008 10:59:53.120813    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 10:59:53.130893    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 10:59:53.130910    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 10:59:53.130915    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 10:59:53.158386    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 10:59:53.158394    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 10:59:53.172929    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 10:59:53.172941    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 10:59:53.198529    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 10:59:53.198542    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 10:59:53.210569    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 10:59:53.210581    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 10:59:53.227731    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 10:59:53.227745    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 10:59:53.239122    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 10:59:53.239135    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 10:59:53.252598    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 10:59:53.252610    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 10:59:53.266161    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 10:59:53.266172    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 10:59:53.277275    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 10:59:53.277287    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 10:59:53.295102    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 10:59:53.295112    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 10:59:53.330869    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 10:59:53.330881    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 10:59:53.346481    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 10:59:53.346492    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 10:59:53.363287    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 10:59:53.363297    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 10:59:53.375313    8523 logs.go:123] Gathering logs for Docker ...
	I1008 10:59:53.375325    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 10:59:53.398826    8523 logs.go:123] Gathering logs for container status ...
	I1008 10:59:53.398834    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 10:59:53.410432    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 10:59:53.410441    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 10:59:55.916602    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:00.918838    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:00.918942    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:00.932415    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:00.932508    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:00.943773    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:00.943877    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:00.955345    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:00.955436    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:00.966084    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:00.966170    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:00.976850    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:00.976939    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:00.987746    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:00.987840    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:00.998134    8523 logs.go:282] 0 containers: []
	W1008 11:00:00.998150    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:00.998228    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:01.011926    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
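Each enumeration pass finds components by container name rather than by label, relying on the kubelet's k8s_<component>_... Docker naming convention; an empty result (as for "kindnet" above) just means that component was never deployed. The same discovery step by hand, inside the guest:

    # List every kube-apiserver container, running or exited, IDs only.
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'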
	I1008 11:00:01.011944    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:01.011950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:01.023560    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:01.023572    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:01.058107    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:01.058120    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:01.072159    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:01.072172    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:01.086570    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:01.086582    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:01.107678    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:01.107689    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:01.120507    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:01.120521    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:01.146130    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:01.146141    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:01.163475    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:01.163487    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:01.175862    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:01.175875    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:01.190056    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:01.190068    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:01.194355    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:01.194362    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:01.208181    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:01.208192    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:01.219676    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:01.219688    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:01.244777    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:01.244788    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:01.258773    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:01.258785    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:01.288550    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:01.288560    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:03.800406    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:08.802897    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:08.803458    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:08.840340    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:08.840512    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:08.860780    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:08.860884    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:08.875365    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:08.875467    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:08.887530    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:08.887607    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:08.898229    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:08.898304    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:08.910288    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:08.910366    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:08.920442    8523 logs.go:282] 0 containers: []
	W1008 11:00:08.920454    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:08.920525    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:08.932206    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:08.932225    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:08.932233    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:08.978721    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:08.978733    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:08.993830    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:08.993841    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:09.015048    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:09.015059    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:09.027052    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:09.027064    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:09.051215    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:09.051223    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:09.055364    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:09.055372    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:09.068340    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:09.068351    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:09.089513    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:09.089523    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:09.106733    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:09.106743    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:09.118734    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:09.118745    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:09.147870    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:09.147881    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:09.162116    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:09.162127    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:09.176534    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:09.176547    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:09.192664    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:09.192678    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:09.204495    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:09.204509    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:09.221860    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:09.221873    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:11.734931    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:16.737268    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:16.737608    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:16.763124    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:16.763254    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:16.778679    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:16.778768    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:16.805175    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:16.805271    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:16.817971    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:16.818055    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:16.829581    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:16.829667    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:16.843241    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:16.843327    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:16.853413    8523 logs.go:282] 0 containers: []
	W1008 11:00:16.853433    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:16.853496    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:16.863624    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:16.863643    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:16.863649    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:16.877385    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:16.877396    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:16.888250    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:16.888262    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:16.901013    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:16.901024    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:16.937533    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:16.937546    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:16.949584    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:16.949595    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:16.965018    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:16.965030    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:16.979501    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:16.979512    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:17.004104    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:17.004112    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:17.015888    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:17.015899    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:17.045609    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:17.045618    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:17.059856    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:17.059866    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:17.072921    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:17.072931    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:17.093559    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:17.093569    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:17.110529    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:17.110539    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:17.130091    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:17.130102    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:17.141859    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:17.141871    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:19.647904    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:24.648728    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:24.649390    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:24.687018    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:24.687177    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:24.712397    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:24.712529    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:24.727766    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:24.727856    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:24.740364    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:24.740444    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:24.751673    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:24.751749    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:24.762753    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:24.762840    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:24.773375    8523 logs.go:282] 0 containers: []
	W1008 11:00:24.773388    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:24.773455    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:24.784400    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:24.784420    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:24.784425    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:24.795808    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:24.795819    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:24.817189    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:24.817200    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:24.832393    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:24.832405    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:24.849043    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:24.849055    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:24.863328    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:24.863338    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:24.876803    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:24.876819    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:24.889503    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:24.889514    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:24.918194    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:24.918203    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:24.956654    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:24.956666    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:24.968434    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:24.968443    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:24.980493    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:24.980505    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:24.984990    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:24.985000    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:25.000427    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:25.000438    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:25.025952    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:25.025963    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:25.048275    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:25.048283    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:25.062466    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:25.062480    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:27.582823    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:32.585474    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:32.585675    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:00:32.602503    8523 logs.go:282] 2 containers: [4940e0f91298 1d27ee4283f5]
	I1008 11:00:32.602608    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:00:32.616502    8523 logs.go:282] 2 containers: [654eb0939bf8 56f80cdf5031]
	I1008 11:00:32.616581    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:00:32.627745    8523 logs.go:282] 1 containers: [6aa5eea53544]
	I1008 11:00:32.627829    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:00:32.638339    8523 logs.go:282] 2 containers: [7560b63a2dfd 723b63a1a7b2]
	I1008 11:00:32.638421    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:00:32.649039    8523 logs.go:282] 1 containers: [29f9d3569422]
	I1008 11:00:32.649110    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:00:32.660091    8523 logs.go:282] 2 containers: [020ea0375367 99fda85dc6b0]
	I1008 11:00:32.660173    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:00:32.670518    8523 logs.go:282] 0 containers: []
	W1008 11:00:32.670530    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:00:32.670601    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:00:32.681032    8523 logs.go:282] 2 containers: [5ab6318527a2 5333aa2337bc]
	I1008 11:00:32.681051    8523 logs.go:123] Gathering logs for kube-apiserver [1d27ee4283f5] ...
	I1008 11:00:32.681056    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1d27ee4283f5"
	I1008 11:00:32.694043    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:00:32.694059    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:00:32.698513    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:00:32.698520    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:00:32.737412    8523 logs.go:123] Gathering logs for coredns [6aa5eea53544] ...
	I1008 11:00:32.737426    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6aa5eea53544"
	I1008 11:00:32.749059    8523 logs.go:123] Gathering logs for kube-scheduler [7560b63a2dfd] ...
	I1008 11:00:32.749070    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7560b63a2dfd"
	I1008 11:00:32.770049    8523 logs.go:123] Gathering logs for kube-scheduler [723b63a1a7b2] ...
	I1008 11:00:32.770058    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 723b63a1a7b2"
	I1008 11:00:32.784744    8523 logs.go:123] Gathering logs for kube-controller-manager [020ea0375367] ...
	I1008 11:00:32.784758    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 020ea0375367"
	I1008 11:00:32.802817    8523 logs.go:123] Gathering logs for kube-apiserver [4940e0f91298] ...
	I1008 11:00:32.802828    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4940e0f91298"
	I1008 11:00:32.816793    8523 logs.go:123] Gathering logs for etcd [654eb0939bf8] ...
	I1008 11:00:32.816806    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 654eb0939bf8"
	I1008 11:00:32.832290    8523 logs.go:123] Gathering logs for etcd [56f80cdf5031] ...
	I1008 11:00:32.832303    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 56f80cdf5031"
	I1008 11:00:32.846837    8523 logs.go:123] Gathering logs for kube-proxy [29f9d3569422] ...
	I1008 11:00:32.846850    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29f9d3569422"
	I1008 11:00:32.858425    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:00:32.858435    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:00:32.882442    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:00:32.882454    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:00:32.895061    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:00:32.895077    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:00:32.925645    8523 logs.go:123] Gathering logs for kube-controller-manager [99fda85dc6b0] ...
	I1008 11:00:32.925657    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 99fda85dc6b0"
	I1008 11:00:32.944792    8523 logs.go:123] Gathering logs for storage-provisioner [5ab6318527a2] ...
	I1008 11:00:32.944804    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5ab6318527a2"
	I1008 11:00:32.958021    8523 logs.go:123] Gathering logs for storage-provisioner [5333aa2337bc] ...
	I1008 11:00:32.958031    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5333aa2337bc"
	I1008 11:00:35.472555    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:40.474768    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:40.474853    8523 kubeadm.go:597] duration metric: took 4m3.78538525s to restartPrimaryControlPlane
	W1008 11:00:40.474931    8523 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1008 11:00:40.474964    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1008 11:00:41.506384    8523 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.031410208s)
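After spending the full 4m0s control-plane wait on failed probes, minikube gives up on restarting the existing control plane and falls back to a reset-and-reinit. The reset step can be replayed manually; a sketch assuming the same cri-dockerd socket as in the log:

    # Tear down the control plane non-interactively. --force skips the
    # confirmation prompt; the CRI socket must match the kubelet's runtime.
    sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force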
	I1008 11:00:41.506465    8523 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1008 11:00:41.511885    8523 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1008 11:00:41.514988    8523 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1008 11:00:41.517611    8523 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1008 11:00:41.517619    8523 kubeadm.go:157] found existing configuration files:
	
	I1008 11:00:41.517653    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf
	I1008 11:00:41.520114    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1008 11:00:41.520148    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1008 11:00:41.523103    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf
	I1008 11:00:41.526006    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1008 11:00:41.526037    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1008 11:00:41.528554    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf
	I1008 11:00:41.531529    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1008 11:00:41.531565    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1008 11:00:41.534836    8523 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf
	I1008 11:00:41.537593    8523 kubeadm.go:163] "https://control-plane.minikube.internal:51227" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:51227 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1008 11:00:41.537624    8523 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
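The four grep-then-rm pairs above are the stale-config sweep: any kubeconfig under /etc/kubernetes that does not mention the expected control-plane endpoint is removed (here all four files are already gone after the reset, so each grep exits with status 2 and the rm is a no-op). The same sweep written as a loop, for illustration:

    # Drop any kubeconfig that does not point at the expected endpoint.
    ENDPOINT=https://control-plane.minikube.internal:51227
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
    done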
	I1008 11:00:41.540220    8523 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1008 11:00:41.558684    8523 kubeadm.go:310] [init] Using Kubernetes version: v1.24.1
	I1008 11:00:41.558734    8523 kubeadm.go:310] [preflight] Running pre-flight checks
	I1008 11:00:41.608780    8523 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1008 11:00:41.608854    8523 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1008 11:00:41.608909    8523 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1008 11:00:41.661555    8523 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1008 11:00:41.665744    8523 out.go:235]   - Generating certificates and keys ...
	I1008 11:00:41.665778    8523 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1008 11:00:41.665807    8523 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1008 11:00:41.665852    8523 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1008 11:00:41.665887    8523 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1008 11:00:41.665921    8523 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1008 11:00:41.665957    8523 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1008 11:00:41.665993    8523 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1008 11:00:41.666028    8523 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1008 11:00:41.666068    8523 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1008 11:00:41.666107    8523 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1008 11:00:41.666127    8523 kubeadm.go:310] [certs] Using the existing "sa" key
	I1008 11:00:41.666164    8523 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1008 11:00:41.940328    8523 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1008 11:00:42.069304    8523 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1008 11:00:42.324133    8523 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1008 11:00:42.385795    8523 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1008 11:00:42.414169    8523 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1008 11:00:42.414512    8523 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1008 11:00:42.414534    8523 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1008 11:00:42.504203    8523 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1008 11:00:42.508402    8523 out.go:235]   - Booting up control plane ...
	I1008 11:00:42.508533    8523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1008 11:00:42.508654    8523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1008 11:00:42.508703    8523 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1008 11:00:42.509315    8523 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1008 11:00:42.510174    8523 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1008 11:00:47.512976    8523 kubeadm.go:310] [apiclient] All control plane components are healthy after 5.002445 seconds
	I1008 11:00:47.513032    8523 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1008 11:00:47.517592    8523 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1008 11:00:48.024928    8523 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1008 11:00:48.025029    8523 kubeadm.go:310] [mark-control-plane] Marking the node stopped-upgrade-810000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1008 11:00:48.528953    8523 kubeadm.go:310] [bootstrap-token] Using token: y0p1cj.lce64642rwb74wr7
	I1008 11:00:48.533101    8523 out.go:235]   - Configuring RBAC rules ...
	I1008 11:00:48.533163    8523 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1008 11:00:48.533215    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1008 11:00:48.535027    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1008 11:00:48.540842    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1008 11:00:48.541923    8523 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1008 11:00:48.543062    8523 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1008 11:00:48.547042    8523 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1008 11:00:48.729561    8523 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1008 11:00:48.934751    8523 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1008 11:00:48.935380    8523 kubeadm.go:310] 
	I1008 11:00:48.935482    8523 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1008 11:00:48.935499    8523 kubeadm.go:310] 
	I1008 11:00:48.935622    8523 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1008 11:00:48.935631    8523 kubeadm.go:310] 
	I1008 11:00:48.935668    8523 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1008 11:00:48.935760    8523 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1008 11:00:48.935830    8523 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1008 11:00:48.935843    8523 kubeadm.go:310] 
	I1008 11:00:48.935918    8523 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1008 11:00:48.935930    8523 kubeadm.go:310] 
	I1008 11:00:48.935992    8523 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1008 11:00:48.936002    8523 kubeadm.go:310] 
	I1008 11:00:48.936088    8523 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1008 11:00:48.936129    8523 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1008 11:00:48.936177    8523 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1008 11:00:48.936185    8523 kubeadm.go:310] 
	I1008 11:00:48.936229    8523 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1008 11:00:48.936270    8523 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1008 11:00:48.936278    8523 kubeadm.go:310] 
	I1008 11:00:48.936333    8523 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token y0p1cj.lce64642rwb74wr7 \
	I1008 11:00:48.936407    8523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a \
	I1008 11:00:48.936421    8523 kubeadm.go:310] 	--control-plane 
	I1008 11:00:48.936423    8523 kubeadm.go:310] 
	I1008 11:00:48.936465    8523 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1008 11:00:48.936470    8523 kubeadm.go:310] 
	I1008 11:00:48.936517    8523 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token y0p1cj.lce64642rwb74wr7 \
	I1008 11:00:48.936602    8523 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:62e893a61543438a55113fac81ed4f49345f71ff8f12e8a170334491d7def86a 
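The join commands embed a bootstrap token plus the SHA-256 hash of the cluster CA's public key. If the hash is ever needed again, it can be recomputed with the standard openssl pipeline from the kubeadm documentation; note that minikube keeps its certificates under /var/lib/minikube/certs (per the [certs] line above) rather than kubeadm's default /etc/kubernetes/pki:

    # Recompute the --discovery-token-ca-cert-hash value from the CA cert.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'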
	I1008 11:00:48.936777    8523 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1008 11:00:48.936788    8523 cni.go:84] Creating CNI manager for ""
	I1008 11:00:48.936797    8523 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:00:48.940971    8523 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1008 11:00:48.948854    8523 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1008 11:00:48.952185    8523 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
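With kubeadm finished, minikube installs pod networking itself: the qemu2 driver with the docker runtime defaults to the built-in bridge CNI on Kubernetes v1.24+, so a 496-byte conflist is copied straight into the guest instead of deploying a CNI addon. To inspect what was written, inside the guest:

    # The bridge conflist minikube just installed; the "1-" prefix
    # makes it sort ahead of any other config in /etc/cni/net.d.
    sudo cat /etc/cni/net.d/1-k8s.conflist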
	I1008 11:00:48.958569    8523 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1008 11:00:48.958648    8523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1008 11:00:48.958842    8523 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-810000 minikube.k8s.io/updated_at=2024_10_08T11_00_48_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=2c4e2f69eea7599bce474e3f41d0dff85410149e minikube.k8s.io/name=stopped-upgrade-810000 minikube.k8s.io/primary=true
	I1008 11:00:48.988824    8523 kubeadm.go:1113] duration metric: took 30.237708ms to wait for elevateKubeSystemPrivileges
	I1008 11:00:48.999481    8523 ops.go:34] apiserver oom_adj: -16
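The oom_adj read confirms the apiserver runs with an OOM score adjustment of -16, meaning the kernel's OOM killer will sacrifice almost any other process before it. The same check by hand, assuming a single kube-apiserver process:

    # Read the OOM score adjustment of the running apiserver.
    cat "/proc/$(pgrep kube-apiserver)/oom_adj"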
	I1008 11:00:48.999492    8523 kubeadm.go:394] duration metric: took 4m12.32408375s to StartCluster
	I1008 11:00:48.999504    8523 settings.go:142] acquiring lock: {Name:mk8a824673b36585a3cfee48bd81254259b5c84b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:48.999690    8523 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:00:49.001124    8523 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/kubeconfig: {Name:mk301b17dd40bdbbbe99e75bcafc6142cf217159 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:00:49.001446    8523 config.go:182] Loaded profile config "stopped-upgrade-810000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I1008 11:00:49.001513    8523 start.go:235] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:00:49.001807    8523 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1008 11:00:49.001983    8523 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-810000"
	I1008 11:00:49.001986    8523 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-810000"
	I1008 11:00:49.001991    8523 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-810000"
	W1008 11:00:49.001994    8523 addons.go:243] addon storage-provisioner should already be in state true
	I1008 11:00:49.001994    8523 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-810000"
	I1008 11:00:49.002003    8523 host.go:66] Checking if "stopped-upgrade-810000" exists ...
	I1008 11:00:49.006026    8523 out.go:177] * Verifying Kubernetes components...
	I1008 11:00:49.006695    8523 kapi.go:59] client config for stopped-upgrade-810000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/stopped-upgrade-810000/client.key", CAFile:"/Users/jenkins/minikube-integration/19774-6384/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x104a380f0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1008 11:00:49.010372    8523 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-810000"
	W1008 11:00:49.010382    8523 addons.go:243] addon default-storageclass should already be in state true
	I1008 11:00:49.010400    8523 host.go:66] Checking if "stopped-upgrade-810000" exists ...
	I1008 11:00:49.011232    8523 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:49.011238    8523 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1008 11:00:49.011243    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 11:00:49.012944    8523 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1008 11:00:49.016973    8523 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1008 11:00:49.021081    8523 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:49.021090    8523 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1008 11:00:49.021096    8523 sshutil.go:53] new ssh client: &{IP:localhost Port:51195 SSHKeyPath:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/stopped-upgrade-810000/id_rsa Username:docker}
	I1008 11:00:49.106963    8523 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1008 11:00:49.113265    8523 api_server.go:52] waiting for apiserver process to appear ...
	I1008 11:00:49.113323    8523 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1008 11:00:49.117394    8523 api_server.go:72] duration metric: took 115.870084ms to wait for apiserver process to appear ...
	I1008 11:00:49.117402    8523 api_server.go:88] waiting for apiserver healthz status ...
	I1008 11:00:49.117409    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:49.136958    8523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1008 11:00:49.199000    8523 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1008 11:00:49.523268    8523 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1008 11:00:49.523281    8523 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1008 11:00:54.118554    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:54.118586    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:00:59.119511    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:00:59.119534    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:04.119736    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:04.119767    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:09.120042    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:09.120066    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:14.120458    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:14.120484    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:19.120984    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:19.121010    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W1008 11:01:19.526216    8523 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I1008 11:01:19.529135    8523 out.go:177] * Enabled addons: storage-provisioner
	I1008 11:01:19.541180    8523 addons.go:510] duration metric: took 30.539736959s for enable addons: enabled=[storage-provisioner]
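Only one of the two addons survives: storage-provisioner was applied through the guest's bundled kubectl against the in-guest kubeconfig, while default-storageclass required the host-side client to list StorageClasses at https://10.0.2.15:8443 and hit the same i/o timeout as every healthz probe. The apply step mirrors the Run line above:

    # Apply an addon manifest via the guest's kubectl, as in the log.
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.24.1/kubectl apply \
      -f /etc/kubernetes/addons/storage-provisioner.yaml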
	I1008 11:01:24.121718    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:24.121759    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:29.122569    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:29.122599    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:34.123635    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:34.123676    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:39.125095    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:39.125119    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:44.126734    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:44.126772    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:49.128858    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:49.128982    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:01:49.140015    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:01:49.140094    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:01:49.150971    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:01:49.151046    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:01:49.161720    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:01:49.161801    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:01:49.172876    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:01:49.172945    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:01:49.183709    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:01:49.183776    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:01:49.194823    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:01:49.194899    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:01:49.206414    8523 logs.go:282] 0 containers: []
	W1008 11:01:49.206429    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:01:49.206494    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:01:49.218771    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:01:49.218786    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:01:49.218792    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:01:49.236763    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:01:49.236774    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:01:49.247989    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:01:49.248002    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:01:49.259772    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:01:49.259786    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:01:49.264104    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:01:49.264110    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:01:49.279427    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:01:49.279438    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:01:49.293941    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:01:49.293952    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:01:49.304999    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:01:49.305008    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:01:49.324693    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:01:49.324704    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:01:49.348575    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:01:49.348587    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:01:49.382505    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:01:49.382513    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:01:49.420962    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:01:49.420974    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:01:49.433480    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:01:49.433494    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:01:51.950962    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:01:56.952035    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:01:56.952196    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:01:56.963748    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:01:56.963832    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:01:56.974593    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:01:56.974673    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:01:56.985820    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:01:56.985897    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:01:56.996767    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:01:56.996846    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:01:57.007321    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:01:57.007400    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:01:57.018088    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:01:57.018160    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:01:57.029090    8523 logs.go:282] 0 containers: []
	W1008 11:01:57.029103    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:01:57.029174    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:01:57.039674    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:01:57.039690    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:01:57.039696    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:01:57.053539    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:01:57.053551    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:01:57.064881    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:01:57.064893    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:01:57.082173    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:01:57.082184    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:01:57.094286    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:01:57.094301    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:01:57.111951    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:01:57.111963    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:01:57.146968    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:01:57.146980    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:01:57.151153    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:01:57.151162    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:01:57.165417    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:01:57.165427    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:01:57.178022    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:01:57.178036    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:01:57.190407    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:01:57.190422    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:01:57.214476    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:01:57.214485    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:01:57.225947    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:01:57.225964    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:01:59.764630    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:04.766971    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:04.767168    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:04.785610    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:04.785712    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:04.800323    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:04.800409    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:04.819783    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:04.819856    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:04.830417    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:04.830491    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:04.840820    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:04.840887    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:04.851678    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:04.851745    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:04.860998    8523 logs.go:282] 0 containers: []
	W1008 11:02:04.861007    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:04.861061    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:04.870960    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:04.870974    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:04.870983    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:04.884892    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:04.884903    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:04.922475    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:04.922483    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:04.926901    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:04.926909    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:04.938906    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:04.938921    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:04.954586    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:04.954595    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:04.966567    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:04.966579    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:04.984053    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:04.984074    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:05.022451    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:05.022466    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:05.037428    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:05.037442    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:05.051745    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:05.051756    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:05.063896    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:05.063908    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:05.087153    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:05.087163    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:07.601956    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:12.603562    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:12.603770    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:12.622704    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:12.622810    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:12.636881    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:12.636956    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:12.647080    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:12.647163    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:12.658133    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:12.658216    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:12.669129    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:12.669214    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:12.679694    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:12.679774    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:12.689962    8523 logs.go:282] 0 containers: []
	W1008 11:02:12.689975    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:12.690033    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:12.700677    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:12.700692    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:12.700700    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:12.716056    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:12.716067    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:12.727494    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:12.727507    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:12.752711    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:12.752721    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:12.764357    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:12.764368    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:12.799335    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:12.799348    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:12.820501    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:12.820516    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:12.836610    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:12.836621    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:12.848606    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:12.848618    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:12.860361    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:12.860372    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:12.872318    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:12.872330    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:12.896050    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:12.896063    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:12.929856    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:12.929865    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:15.436379    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:20.438667    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:20.438782    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:20.452143    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:20.452237    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:20.463630    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:20.463713    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:20.479240    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:20.479322    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:20.490467    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:20.490546    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:20.506431    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:20.506501    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:20.517124    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:20.517199    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:20.527512    8523 logs.go:282] 0 containers: []
	W1008 11:02:20.527526    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:20.527587    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:20.539486    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:20.539502    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:20.539510    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:20.553739    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:20.553753    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:20.565873    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:20.565887    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:20.582577    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:20.582590    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:20.594900    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:20.594909    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:20.619246    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:20.619252    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:20.655342    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:20.655354    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:20.694872    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:20.694884    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:20.709534    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:20.709547    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:20.721542    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:20.721554    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:20.733852    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:20.733863    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:20.738804    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:20.738814    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:20.754050    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:20.754060    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:23.273110    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:28.275298    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:28.275408    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:28.287090    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:28.287183    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:28.298621    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:28.298688    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:28.309224    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:28.309307    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:28.320534    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:28.320612    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:28.331439    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:28.331518    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:28.342235    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:28.342309    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:28.354059    8523 logs.go:282] 0 containers: []
	W1008 11:02:28.354073    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:28.354140    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:28.365023    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:28.365039    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:28.365044    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:28.377287    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:28.377300    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:28.393072    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:28.393087    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:28.418857    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:28.418864    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:28.455618    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:28.455636    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:28.459942    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:28.459950    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:28.472156    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:28.472167    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:28.485773    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:28.485784    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:28.502037    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:28.502047    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:28.539889    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:28.539906    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:28.554900    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:28.554911    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:28.569863    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:28.569880    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:28.588419    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:28.588451    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:31.102632    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:36.104946    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:36.105170    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:36.125721    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:36.125820    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:36.140630    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:36.140709    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:36.152770    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:36.152842    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:36.164012    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:36.164086    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:36.175232    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:36.175299    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:36.186472    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:36.186539    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:36.197617    8523 logs.go:282] 0 containers: []
	W1008 11:02:36.197633    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:36.197698    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:36.209508    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:36.209526    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:36.209532    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:36.214529    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:36.214537    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:36.227123    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:36.227135    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:36.252027    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:36.252037    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:36.264018    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:36.264027    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:36.280754    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:36.280767    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:36.319502    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:36.319513    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:36.340099    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:36.340111    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:36.375059    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:36.375067    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:36.412058    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:36.412074    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:36.427953    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:36.427963    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:36.442521    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:36.442537    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:36.455436    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:36.455447    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:38.969429    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:43.971654    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:43.971766    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:43.984625    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:43.984725    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:43.996549    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:43.996658    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:44.008211    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:44.008297    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:44.019726    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:44.019805    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:44.030980    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:44.031066    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:44.042204    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:44.042280    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:44.053228    8523 logs.go:282] 0 containers: []
	W1008 11:02:44.053238    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:44.053299    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:44.064061    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:44.064077    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:44.064083    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:44.076330    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:44.076343    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:44.110577    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:44.110586    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:44.125204    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:44.125216    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:44.137522    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:44.137533    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:44.149594    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:44.149605    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:44.161639    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:44.161651    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:44.174138    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:44.174149    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:44.199347    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:44.199355    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:44.203615    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:44.203626    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:44.241776    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:44.241790    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:44.256917    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:44.256931    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:44.273515    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:44.273526    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:46.794677    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:51.796867    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:51.797056    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:51.811850    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:51.811942    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:51.823709    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:51.823785    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:51.835170    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:51.835252    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:51.846361    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:51.846440    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:51.857696    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:51.857771    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:51.868403    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:51.868478    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:51.879195    8523 logs.go:282] 0 containers: []
	W1008 11:02:51.879207    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:51.879267    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:51.889998    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:51.890014    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:51.890020    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:51.904783    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:51.904798    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:51.920543    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:51.920555    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:51.934145    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:51.934159    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:51.951968    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:51.951980    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:51.964418    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:51.964429    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:02:51.999730    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:51.999737    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:52.003635    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:52.003641    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:52.043390    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:52.043402    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:52.067202    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:52.067215    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:52.081650    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:52.081664    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:52.093883    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:52.093894    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:52.118183    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:52.118192    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:54.631204    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:02:59.633380    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:02:59.633527    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:02:59.646498    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:02:59.646578    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:02:59.658514    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:02:59.658596    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:02:59.668939    8523 logs.go:282] 2 containers: [20206e5bcfba 9c617b0a49df]
	I1008 11:02:59.669034    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:02:59.679450    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:02:59.679526    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:02:59.691378    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:02:59.691461    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:02:59.702402    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:02:59.702477    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:02:59.712670    8523 logs.go:282] 0 containers: []
	W1008 11:02:59.712685    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:02:59.712760    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:02:59.722996    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:02:59.723011    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:02:59.723018    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:02:59.737175    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:02:59.737189    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:02:59.748872    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:02:59.748884    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:02:59.763903    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:02:59.763912    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:02:59.776005    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:02:59.776020    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:02:59.800355    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:02:59.800363    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:02:59.814108    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:02:59.814118    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:02:59.818815    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:02:59.818823    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:02:59.854478    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:02:59.854490    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:02:59.868909    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:02:59.868919    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:02:59.888522    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:02:59.888535    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:02:59.914595    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:02:59.914613    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:02:59.961098    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:02:59.961116    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:02.505643    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:07.507337    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:07.507456    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:07.518110    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:07.518196    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:07.529634    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:07.529729    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:07.540471    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:07.540559    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:07.550634    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:07.550728    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:07.561470    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:07.561550    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:07.572305    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:07.572373    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:07.582554    8523 logs.go:282] 0 containers: []
	W1008 11:03:07.582566    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:07.582634    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:07.599687    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:07.599704    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:07.599710    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:07.611549    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:07.611565    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:07.626453    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:07.626466    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:07.632536    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:07.632547    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:07.666080    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:07.666095    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:07.680672    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:07.680686    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:07.694579    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:07.694593    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:07.706363    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:07.706379    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:07.718379    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:07.718390    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:07.729958    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:07.729970    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:07.742013    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:07.742025    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:07.755730    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:07.755741    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:07.773279    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:07.773290    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:07.809898    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:07.809907    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:07.824961    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:07.824972    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:10.350782    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:15.352266    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:15.352439    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:15.368073    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:15.368170    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:15.381095    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:15.381180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:15.392356    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:15.392439    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:15.403084    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:15.403167    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:15.415429    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:15.415510    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:15.426042    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:15.426127    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:15.436407    8523 logs.go:282] 0 containers: []
	W1008 11:03:15.436418    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:15.436488    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:15.446687    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:15.446712    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:15.446718    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:15.458285    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:15.458300    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:15.475970    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:15.475981    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:15.489836    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:15.489848    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:15.501958    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:15.501972    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:15.517866    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:15.517875    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:15.530425    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:15.530433    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:15.534752    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:15.534759    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:15.569971    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:15.569983    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:15.587471    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:15.587484    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:15.603094    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:15.603106    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:15.639508    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:15.639518    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:15.651972    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:15.651984    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:15.677432    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:15.677445    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:15.689857    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:15.689871    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:18.204241    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:23.206526    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:23.206662    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:23.218963    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:23.219057    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:23.230204    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:23.230277    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:23.241481    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:23.241556    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:23.256819    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:23.256894    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:23.268130    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:23.268210    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:23.278905    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:23.278987    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:23.289980    8523 logs.go:282] 0 containers: []
	W1008 11:03:23.289995    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:23.290061    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:23.300858    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:23.300879    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:23.300886    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:23.336883    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:23.336894    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:23.355597    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:23.355609    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:23.367495    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:23.367508    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:23.392809    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:23.392819    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:23.406997    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:23.407008    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:23.418583    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:23.418596    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:23.431194    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:23.431206    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:23.442742    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:23.442756    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:23.446940    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:23.446949    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:23.458038    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:23.458048    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:23.470091    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:23.470105    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:23.488726    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:23.488738    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:23.525044    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:23.525051    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:23.539934    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:23.539948    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:26.054383    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:31.056721    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:31.056897    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:31.069600    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:31.069682    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:31.081315    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:31.081396    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:31.092292    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:31.092373    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:31.102441    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:31.102521    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:31.113143    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:31.113221    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:31.131277    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:31.131351    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:31.141697    8523 logs.go:282] 0 containers: []
	W1008 11:03:31.141712    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:31.141784    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:31.159261    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:31.159279    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:31.159285    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:31.171328    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:31.171339    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:31.193261    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:31.193272    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:31.205471    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:31.205484    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:31.220363    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:31.220376    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:31.234130    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:31.234142    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:31.269908    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:31.269920    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:31.282221    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:31.282233    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:31.318239    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:31.318247    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:31.322719    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:31.322729    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:31.334470    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:31.334481    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:31.346125    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:31.346137    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:31.361039    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:31.361048    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:31.385204    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:31.385213    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:31.396397    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:31.396409    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:33.909749    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:38.912036    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:38.912185    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:38.926152    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:38.926240    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:38.937945    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:38.938027    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:38.948255    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:38.948344    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:38.959255    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:38.959335    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:38.972110    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:38.972180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:38.984501    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:38.984583    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:38.994579    8523 logs.go:282] 0 containers: []
	W1008 11:03:38.994593    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:38.994655    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:39.005140    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:39.005159    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:39.005166    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:39.040445    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:39.040457    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:39.055334    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:39.055345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:39.069413    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:39.069424    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:39.082656    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:39.082672    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:39.094542    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:39.094553    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:39.119820    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:39.119827    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:39.140043    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:39.140055    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:39.175715    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:39.175723    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:39.179877    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:39.179883    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:39.196910    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:39.196922    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:39.208273    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:39.208284    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:39.219688    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:39.219701    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:39.231698    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:39.231713    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:39.246073    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:39.246085    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:41.766720    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:46.767268    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:46.767457    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:46.778680    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:46.778760    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:46.788998    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:46.789076    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:46.803096    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:46.803180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:46.813568    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:46.813633    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:46.824382    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:46.824448    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:46.834853    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:46.834932    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:46.848110    8523 logs.go:282] 0 containers: []
	W1008 11:03:46.848123    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:46.848189    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:46.858305    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:46.858322    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:46.858328    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:46.892511    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:46.892524    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:46.904331    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:46.904345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:46.921933    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:46.921947    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:46.935948    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:46.935959    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:46.949184    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:46.949195    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:46.962478    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:46.962490    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:46.974492    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:46.974504    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:46.986340    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:46.986353    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:47.001237    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:47.001247    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:47.012931    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:47.012944    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:47.028211    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:47.028222    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:47.039604    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:47.039616    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:47.063016    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:47.063023    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:47.067076    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:47.067083    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:49.607413    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:03:54.609821    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:03:54.610347    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:03:54.653662    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:03:54.653829    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:03:54.674439    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:03:54.674575    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:03:54.689989    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:03:54.690082    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:03:54.702063    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:03:54.702151    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:03:54.713076    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:03:54.713156    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:03:54.724320    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:03:54.724402    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:03:54.736745    8523 logs.go:282] 0 containers: []
	W1008 11:03:54.736758    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:03:54.736829    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:03:54.752317    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:03:54.752335    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:03:54.752340    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:03:54.787006    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:03:54.787016    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:03:54.827240    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:03:54.827254    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:03:54.842774    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:03:54.842813    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:03:54.857075    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:03:54.857087    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:03:54.869712    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:03:54.869725    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:03:54.887718    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:03:54.887730    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:03:54.892480    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:03:54.892489    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:03:54.907557    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:03:54.907569    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:03:54.919858    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:03:54.919869    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:03:54.935696    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:03:54.935707    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:03:54.947656    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:03:54.947665    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:03:54.967110    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:03:54.967123    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:03:54.979515    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:03:54.979525    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:03:55.004941    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:03:55.004950    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:03:57.519335    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:02.521670    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:02.521825    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:02.534352    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:02.534438    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:02.544882    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:02.544949    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:02.555725    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:02.555807    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:02.565999    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:02.566098    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:02.576796    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:02.576869    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:02.587892    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:02.587964    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:02.599203    8523 logs.go:282] 0 containers: []
	W1008 11:04:02.599215    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:02.599280    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:02.609611    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:02.609627    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:02.609635    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:02.623863    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:02.623873    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:02.637960    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:02.637971    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:02.649867    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:02.649879    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:02.661984    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:02.661995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:02.674133    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:02.674145    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:02.686536    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:02.686547    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:02.711830    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:02.711838    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:02.723333    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:02.723345    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:02.740656    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:02.740666    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:02.776430    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:02.776444    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:02.784600    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:02.784614    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:02.823140    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:02.823152    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:02.845902    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:02.845916    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:02.858318    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:02.858330    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:05.375422    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:10.377726    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:10.377898    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:10.392386    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:10.392483    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:10.403532    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:10.403609    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:10.413985    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:10.414067    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:10.428020    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:10.428104    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:10.438527    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:10.438593    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:10.449290    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:10.449370    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:10.463156    8523 logs.go:282] 0 containers: []
	W1008 11:04:10.463169    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:10.463238    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:10.473468    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:10.473486    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:10.473491    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:10.487381    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:10.487393    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:10.502311    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:10.502325    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:10.514407    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:10.514424    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:10.528569    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:10.528582    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:10.540406    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:10.540420    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:10.555699    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:10.555708    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:10.567089    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:10.567102    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:10.571367    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:10.571374    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:10.606231    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:10.606245    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:10.620659    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:10.620676    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:10.645832    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:10.645846    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:10.679477    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:10.679486    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:10.691422    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:10.691433    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:10.704162    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:10.704173    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:13.223593    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:18.225881    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:18.226042    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:18.241985    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:18.242079    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:18.254050    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:18.254132    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:18.264801    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:18.264882    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:18.276440    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:18.276517    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:18.286946    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:18.287016    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:18.297369    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:18.297450    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:18.307757    8523 logs.go:282] 0 containers: []
	W1008 11:04:18.307769    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:18.307833    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:18.318898    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:18.318917    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:18.318922    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:18.333752    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:18.333764    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:18.357490    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:18.357498    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:18.361940    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:18.361948    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:18.373547    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:18.373557    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:18.385803    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:18.385815    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:18.401513    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:18.401524    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:18.419388    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:18.419399    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:18.453982    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:18.453995    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:18.465833    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:18.465845    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:18.502285    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:18.502294    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:18.516601    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:18.516613    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:18.528950    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:18.528960    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:18.547385    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:18.547394    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:18.559278    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:18.559289    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:21.072853    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:26.075105    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:26.075230    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:26.090381    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:26.090458    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:26.100958    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:26.101039    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:26.111635    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:26.111717    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:26.122282    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:26.122361    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:26.132948    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:26.133022    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:26.143296    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:26.143377    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:26.153829    8523 logs.go:282] 0 containers: []
	W1008 11:04:26.153839    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:26.153903    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:26.164641    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:26.164657    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:26.164663    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:26.183625    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:26.183635    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:26.195088    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:26.195100    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:26.219576    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:26.219588    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:26.224170    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:26.224179    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:26.240304    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:26.240316    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:26.254710    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:26.254721    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:26.266434    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:26.266448    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:26.300499    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:26.300508    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:26.338105    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:26.338117    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:26.350239    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:26.350249    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:26.367389    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:26.367399    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:26.381377    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:26.381389    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:26.393317    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:26.393327    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:26.405286    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:26.405298    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:28.919281    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:33.921522    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:33.921666    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:33.936144    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:33.936236    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:33.950099    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:33.950180    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:33.960831    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:33.960914    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:33.971943    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:33.972017    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:33.982973    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:33.983041    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:33.993415    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:33.993485    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:34.003970    8523 logs.go:282] 0 containers: []
	W1008 11:04:34.004007    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:34.004077    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:34.020282    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:34.020300    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:34.020307    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:34.033108    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:34.033118    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:34.057493    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:34.057502    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:34.092156    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:34.092164    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:34.132805    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:34.132817    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:34.148469    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:34.148486    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:34.160549    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:34.160562    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:34.172305    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:34.172320    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:34.184263    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:34.184275    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:34.198636    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:34.198650    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:34.210996    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:34.211007    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:34.226574    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:34.226588    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:34.246888    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:34.246901    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:34.251415    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:34.251422    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:34.263842    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:34.263852    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:36.781562    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:41.728239    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:41.728547    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1008 11:04:41.755113    8523 logs.go:282] 1 containers: [19e9db2e1dc4]
	I1008 11:04:41.755259    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1008 11:04:41.777122    8523 logs.go:282] 1 containers: [984ecc4fe36b]
	I1008 11:04:41.777225    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1008 11:04:41.789850    8523 logs.go:282] 4 containers: [eae35ff231e8 593aff630348 20206e5bcfba 9c617b0a49df]
	I1008 11:04:41.789934    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1008 11:04:41.801298    8523 logs.go:282] 1 containers: [816c15dd5231]
	I1008 11:04:41.801367    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1008 11:04:41.811985    8523 logs.go:282] 1 containers: [6757561915f4]
	I1008 11:04:41.812062    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1008 11:04:41.822749    8523 logs.go:282] 1 containers: [92d3400fc096]
	I1008 11:04:41.822826    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1008 11:04:41.832866    8523 logs.go:282] 0 containers: []
	W1008 11:04:41.832880    8523 logs.go:284] No container was found matching "kindnet"
	I1008 11:04:41.832934    8523 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1008 11:04:41.843806    8523 logs.go:282] 1 containers: [934257e6d8ff]
	I1008 11:04:41.843823    8523 logs.go:123] Gathering logs for etcd [984ecc4fe36b] ...
	I1008 11:04:41.843828    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 984ecc4fe36b"
	I1008 11:04:41.866377    8523 logs.go:123] Gathering logs for coredns [9c617b0a49df] ...
	I1008 11:04:41.866389    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9c617b0a49df"
	I1008 11:04:41.878669    8523 logs.go:123] Gathering logs for kube-scheduler [816c15dd5231] ...
	I1008 11:04:41.878682    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 816c15dd5231"
	I1008 11:04:41.895532    8523 logs.go:123] Gathering logs for container status ...
	I1008 11:04:41.895543    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1008 11:04:41.907561    8523 logs.go:123] Gathering logs for kubelet ...
	I1008 11:04:41.907572    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1008 11:04:41.942311    8523 logs.go:123] Gathering logs for coredns [593aff630348] ...
	I1008 11:04:41.942331    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 593aff630348"
	I1008 11:04:41.964985    8523 logs.go:123] Gathering logs for kube-proxy [6757561915f4] ...
	I1008 11:04:41.965002    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6757561915f4"
	I1008 11:04:41.977425    8523 logs.go:123] Gathering logs for kube-apiserver [19e9db2e1dc4] ...
	I1008 11:04:41.977436    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 19e9db2e1dc4"
	I1008 11:04:41.992484    8523 logs.go:123] Gathering logs for coredns [eae35ff231e8] ...
	I1008 11:04:41.992496    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 eae35ff231e8"
	I1008 11:04:42.008632    8523 logs.go:123] Gathering logs for coredns [20206e5bcfba] ...
	I1008 11:04:42.008642    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 20206e5bcfba"
	I1008 11:04:42.020609    8523 logs.go:123] Gathering logs for kube-controller-manager [92d3400fc096] ...
	I1008 11:04:42.020620    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 92d3400fc096"
	I1008 11:04:42.038555    8523 logs.go:123] Gathering logs for storage-provisioner [934257e6d8ff] ...
	I1008 11:04:42.038566    8523 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 934257e6d8ff"
	I1008 11:04:42.050189    8523 logs.go:123] Gathering logs for Docker ...
	I1008 11:04:42.050200    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1008 11:04:42.074472    8523 logs.go:123] Gathering logs for dmesg ...
	I1008 11:04:42.074479    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1008 11:04:42.078754    8523 logs.go:123] Gathering logs for describe nodes ...
	I1008 11:04:42.078762    8523 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1008 11:04:44.614753    8523 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I1008 11:04:49.617024    8523 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1008 11:04:49.621449    8523 out.go:201] 
	W1008 11:04:49.625319    8523 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W1008 11:04:49.625325    8523 out.go:270] * 
	W1008 11:04:49.626338    8523 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:04:49.637327    8523 out.go:201] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-810000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (608.51s)
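The failure mode above is a timeout, not a crash: the start loop polls the apiserver at https://10.0.2.15:8443/healthz with a 5-second client timeout, re-gathers control-plane container logs between attempts, and exits with GUEST_START (exit status 80) once the 6m0s node-wait budget expires. The Go sketch below reproduces that polling pattern for illustration only; the URL, per-request timeout, and overall budget are read off the log above, while the function names are hypothetical and this is not minikube's actual implementation.

	// healthz_probe.go - minimal sketch of the polling loop seen in the log above.
	// Assumptions: probe URL, 5s per-request timeout, and 6m budget are taken
	// from the log; everything else (names, back-off interval) is hypothetical.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, budget time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // matches "Client.Timeout exceeded while awaiting headers"
			// The guest apiserver serves a self-signed certificate.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(budget)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthy
				}
			}
			time.Sleep(2 * time.Second) // pause between probes, as the log timestamps suggest
		}
		return fmt.Errorf("apiserver healthz never reported healthy: context deadline exceeded")
	}

	func main() {
		if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
			fmt.Println("X Exiting due to GUEST_START:", err)
		}
	}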

TestPause/serial/Start (10.25s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-170000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-170000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (10.178142583s)

-- stdout --
	* [pause-170000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-170000" primary control-plane node in "pause-170000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-170000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-170000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-170000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-170000 -n pause-170000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-170000 -n pause-170000: exit status 7 (70.640584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-170000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (10.25s)
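Unlike the upgrade test above, this failure never reaches Kubernetes at all: the qemu2 driver routes guest networking through the socket_vmnet helper, and the connection to "/var/run/socket_vmnet" is refused, which indicates the helper daemon was not running (or was listening on a different socket) on the CI host when the VM was created. Below is a minimal, hypothetical preflight sketch in Go that dials the socket before attempting `minikube start`; the socket path is taken from the error message, and nothing here is part of the actual test suite.

	// vmnet_preflight.go - hypothetical check for the socket_vmnet failure above.
	// Dials the unix socket the qemu2 driver needs; "connection refused" here
	// corresponds to the ERROR line in the captured stdout.
	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		const sock = "/var/run/socket_vmnet" // path taken from the error message
		conn, err := net.DialTimeout("unix", sock, 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
			fmt.Fprintln(os.Stderr, "hint: start the socket_vmnet daemon before `minikube start --driver=qemu2`")
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet reachable; qemu2 networking should come up")
	}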

TestNoKubernetes/serial/StartWithK8s (10.06s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-490000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-490000 --driver=qemu2 : exit status 80 (9.994587958s)

                                                
                                                
-- stdout --
	* [NoKubernetes-490000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-490000" primary control-plane node in "NoKubernetes-490000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-490000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-490000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-490000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000: exit status 7 (64.541917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-490000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.06s)
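A reading aid for the recurring post-mortems: minikube's own help text describes the `minikube status` exit code as a bitmask (1 = host not OK, 2 = kubelet not OK, 4 = apiserver not OK), under which the `exit status 7` seen here means all three are down. Treat that mapping as an assumption if your minikube version differs; a small decoder sketch:

package main

import "fmt"

// decode interprets a minikube status exit code under the assumed bitmask
// 1 = host, 2 = kubelet, 4 = apiserver (each bit set means "not OK").
func decode(code int) []string {
	var down []string
	for _, f := range []struct {
		bit  int
		name string
	}{{1, "host"}, {2, "kubelet"}, {4, "apiserver"}} {
		if code&f.bit != 0 {
			down = append(down, f.name)
		}
	}
	return down
}

func main() {
	fmt.Println(decode(7)) // [host kubelet apiserver]
}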

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (6.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --driver=qemu2 : exit status 80 (5.920830167s)

                                                
                                                
-- stdout --
	* [NoKubernetes-490000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-490000
	* Restarting existing qemu2 VM for "NoKubernetes-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-490000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000: exit status 7 (84.08225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-490000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (6.01s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.84s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (1.84s)
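This failure is unrelated to socket_vmnet: hyperkit is an Intel-only hypervisor, so minikube rejects it on this darwin/arm64 host with DRV_UNSUPPORTED_OS (the test's console output appears interleaved under the next test block below). A sketch of the kind of platform guard that produces the error follows; it is illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"runtime"
)

// hyperkit only runs on Intel Macs, so anything but darwin/amd64 is rejected.
func hyperkitSupported() bool {
	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
}

func main() {
	if !hyperkitSupported() {
		fmt.Printf("The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
	}
}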

                                                
                                    
TestNoKubernetes/serial/Start (6.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --driver=qemu2 
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19774
- KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3016916312/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --driver=qemu2 : exit status 80 (6.400964666s)

                                                
                                                
-- stdout --
	* [NoKubernetes-490000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-490000
	* Restarting existing qemu2 VM for "NoKubernetes-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-490000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000: exit status 7 (78.183458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-490000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (6.48s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.35s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.34.0 on darwin (arm64)
- MINIKUBE_LOCATION=19774
- KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current2809481878/001
* Using the hyperkit driver based on user configuration

                                                
                                                
X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

                                                
                                                
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (1.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (5.92s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-490000 --driver=qemu2 
I1008 11:05:54.368197    6907 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0] Decompressors:map[bz2:0x14000835650 gz:0x14000835658 tar:0x14000835600 tar.bz2:0x14000835610 tar.gz:0x14000835620 tar.xz:0x14000835630 tar.zst:0x14000835640 tbz2:0x14000835610 tgz:0x14000835620 txz:0x14000835630 tzst:0x14000835640 xz:0x14000835660 zip:0x14000835670 zst:0x14000835668] Getters:map[file:0x140015f2660 http:0x1400082d4a0 https:0x1400082d4f0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1008 11:05:54.368245    6907 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit
I1008 11:05:57.026292    6907 install.go:79] stdout: 
W1008 11:05:57.026454    6907 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
I1008 11:05:57.026484    6907 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit]
I1008 11:05:57.043128    6907 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit]
I1008 11:05:57.056248    6907 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit]
I1008 11:05:57.066961    6907 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit]
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-490000 --driver=qemu2 : exit status 80 (5.839760708s)

                                                
                                                
-- stdout --
	* [NoKubernetes-490000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-490000
	* Restarting existing qemu2 VM for "NoKubernetes-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-490000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-490000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-490000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-490000 -n NoKubernetes-490000: exit status 7 (74.581042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-490000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.92s)
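The driver.go lines interleaved into this test show the driver-download fallback at work: the arch-specific asset's .sha256 checksum file returns 404, so the installer falls back to the common docker-machine-driver-hyperkit. A hedged sketch of that probe-then-fall-back flow follows; the URLs are from the log, but the control flow is an illustration, not minikube's implementation.

package main

import (
	"fmt"
	"net/http"
)

// pick returns the first URL whose HEAD request answers 200 OK, mirroring the
// "bad response code: 404 ... trying to get the common version" behaviour above.
func pick(urls []string) (string, error) {
	for _, u := range urls {
		resp, err := http.Head(u)
		if err != nil {
			return "", err
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return u, nil
		}
		fmt.Printf("bad response code: %d for %s, trying next\n", resp.StatusCode, u)
	}
	return "", fmt.Errorf("no usable asset")
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/"
	u, err := pick([]string{
		base + "docker-machine-driver-hyperkit-arm64.sha256", // arch-specific: 404s
		base + "docker-machine-driver-hyperkit.sha256",       // common fallback
	})
	fmt.Println(u, err)
}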

                                                
                                    
TestNetworkPlugins/group/auto/Start (9.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.983827125s)

                                                
                                                
-- stdout --
	* [auto-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-446000" primary control-plane node in "auto-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:06:27.486014    9213 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:06:27.486172    9213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:27.486175    9213 out.go:358] Setting ErrFile to fd 2...
	I1008 11:06:27.486177    9213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:27.486320    9213 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:06:27.487463    9213 out.go:352] Setting JSON to false
	I1008 11:06:27.505196    9213 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5757,"bootTime":1728405030,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:06:27.505261    9213 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:06:27.510678    9213 out.go:177] * [auto-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:06:27.517503    9213 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:06:27.517554    9213 notify.go:220] Checking for updates...
	I1008 11:06:27.524404    9213 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:06:27.527460    9213 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:06:27.530486    9213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:06:27.533466    9213 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:06:27.536457    9213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:06:27.539810    9213 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:27.539886    9213 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:27.539929    9213 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:06:27.544439    9213 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:06:27.551447    9213 start.go:297] selected driver: qemu2
	I1008 11:06:27.551455    9213 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:06:27.551462    9213 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:06:27.553925    9213 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:06:27.557356    9213 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:06:27.560480    9213 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:06:27.560497    9213 cni.go:84] Creating CNI manager for ""
	I1008 11:06:27.560520    9213 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:06:27.560529    9213 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:06:27.560555    9213 start.go:340] cluster config:
	{Name:auto-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:06:27.565183    9213 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:06:27.573445    9213 out.go:177] * Starting "auto-446000" primary control-plane node in "auto-446000" cluster
	I1008 11:06:27.577409    9213 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:06:27.577422    9213 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:06:27.577430    9213 cache.go:56] Caching tarball of preloaded images
	I1008 11:06:27.577500    9213 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:06:27.577506    9213 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:06:27.577565    9213 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/auto-446000/config.json ...
	I1008 11:06:27.577575    9213 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/auto-446000/config.json: {Name:mk25862a6b760b6bb4adc16aba6a16a5bf23ca8e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:06:27.577941    9213 start.go:360] acquireMachinesLock for auto-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:27.577992    9213 start.go:364] duration metric: took 44.541µs to acquireMachinesLock for "auto-446000"
	I1008 11:06:27.578003    9213 start.go:93] Provisioning new machine with config: &{Name:auto-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:27.578034    9213 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:27.581409    9213 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:06:27.598728    9213 start.go:159] libmachine.API.Create for "auto-446000" (driver="qemu2")
	I1008 11:06:27.598764    9213 client.go:168] LocalClient.Create starting
	I1008 11:06:27.598838    9213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:27.598875    9213 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:27.598886    9213 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:27.598932    9213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:27.598963    9213 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:27.598973    9213 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:27.599352    9213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:27.743038    9213 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:27.878811    9213 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:27.878819    9213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:27.879011    9213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2
	I1008 11:06:27.888873    9213 main.go:141] libmachine: STDOUT: 
	I1008 11:06:27.888887    9213 main.go:141] libmachine: STDERR: 
	I1008 11:06:27.888945    9213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2 +20000M
	I1008 11:06:27.897317    9213 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:27.897331    9213 main.go:141] libmachine: STDERR: 
	I1008 11:06:27.897347    9213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2
	I1008 11:06:27.897355    9213 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:27.897367    9213 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:27.897411    9213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=5a:51:c1:3a:38:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2
	I1008 11:06:27.899216    9213 main.go:141] libmachine: STDOUT: 
	I1008 11:06:27.899230    9213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:27.899248    9213 client.go:171] duration metric: took 300.485ms to LocalClient.Create
	I1008 11:06:29.901460    9213 start.go:128] duration metric: took 2.323433333s to createHost
	I1008 11:06:29.901564    9213 start.go:83] releasing machines lock for "auto-446000", held for 2.323602833s
	W1008 11:06:29.901623    9213 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:29.914780    9213 out.go:177] * Deleting "auto-446000" in qemu2 ...
	W1008 11:06:29.937827    9213 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:29.937858    9213 start.go:729] Will try again in 5 seconds ...
	I1008 11:06:34.939971    9213 start.go:360] acquireMachinesLock for auto-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:34.940556    9213 start.go:364] duration metric: took 498.167µs to acquireMachinesLock for "auto-446000"
	I1008 11:06:34.940721    9213 start.go:93] Provisioning new machine with config: &{Name:auto-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:auto-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:34.940968    9213 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:34.953537    9213 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:06:35.002339    9213 start.go:159] libmachine.API.Create for "auto-446000" (driver="qemu2")
	I1008 11:06:35.002402    9213 client.go:168] LocalClient.Create starting
	I1008 11:06:35.002536    9213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:35.002611    9213 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:35.002625    9213 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:35.002686    9213 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:35.002742    9213 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:35.002769    9213 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:35.003349    9213 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:35.158352    9213 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:35.370147    9213 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:35.370165    9213 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:35.370400    9213 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2
	I1008 11:06:35.380828    9213 main.go:141] libmachine: STDOUT: 
	I1008 11:06:35.380849    9213 main.go:141] libmachine: STDERR: 
	I1008 11:06:35.380905    9213 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2 +20000M
	I1008 11:06:35.389412    9213 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:35.389428    9213 main.go:141] libmachine: STDERR: 
	I1008 11:06:35.389444    9213 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2
	I1008 11:06:35.389450    9213 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:35.389459    9213 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:35.389487    9213 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:d7:fc:85:ec:77 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/auto-446000/disk.qcow2
	I1008 11:06:35.391287    9213 main.go:141] libmachine: STDOUT: 
	I1008 11:06:35.391308    9213 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:35.391321    9213 client.go:171] duration metric: took 388.919375ms to LocalClient.Create
	I1008 11:06:37.393451    9213 start.go:128] duration metric: took 2.452499625s to createHost
	I1008 11:06:37.393583    9213 start.go:83] releasing machines lock for "auto-446000", held for 2.453041167s
	W1008 11:06:37.393903    9213 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p auto-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:37.405553    9213 out.go:201] 
	W1008 11:06:37.409597    9213 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:06:37.409631    9213 out.go:270] * 
	* 
	W1008 11:06:37.412162    9213 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:06:37.421527    9213 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.99s)
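The libmachine lines above show how the VM's NIC is wired: socket_vmnet_client dials /var/run/socket_vmnet and execs qemu-system-aarch64 with the connected socket passed as fd 3 (-netdev socket,id=net0,fd=3), which is why a dead daemon kills the VM before it can boot. Below is a trimmed sketch of that invocation using only flags taken from the log; the real command also carries the firmware, ISO, QMP and disk arguments elided here.

package main

import (
	"os"
	"os/exec"
)

func main() {
	// socket_vmnet_client connects to the unix socket and hands the fd to
	// the wrapped qemu process; flags below are a subset of the log's command.
	cmd := exec.Command("/opt/socket_vmnet/bin/socket_vmnet_client",
		"/var/run/socket_vmnet",
		"qemu-system-aarch64",
		"-M", "virt,highmem=off",
		"-device", "virtio-net-pci,netdev=net0",
		"-netdev", "socket,id=net0,fd=3",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	// With the daemon down this fails exactly as in the report:
	//   Failed to connect to "/var/run/socket_vmnet": Connection refused
	_ = cmd.Run()
}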

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (9.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.770217042s)

                                                
                                                
-- stdout --
	* [kindnet-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-446000" primary control-plane node in "kindnet-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:06:39.833467    9325 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:06:39.833623    9325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:39.833627    9325 out.go:358] Setting ErrFile to fd 2...
	I1008 11:06:39.833629    9325 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:39.833754    9325 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:06:39.834881    9325 out.go:352] Setting JSON to false
	I1008 11:06:39.852522    9325 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5769,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:06:39.852594    9325 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:06:39.857712    9325 out.go:177] * [kindnet-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:06:39.864663    9325 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:06:39.864716    9325 notify.go:220] Checking for updates...
	I1008 11:06:39.871713    9325 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:06:39.874656    9325 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:06:39.877682    9325 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:06:39.880730    9325 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:06:39.883690    9325 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:06:39.887040    9325 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:39.887120    9325 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:39.887169    9325 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:06:39.891677    9325 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:06:39.898664    9325 start.go:297] selected driver: qemu2
	I1008 11:06:39.898672    9325 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:06:39.898679    9325 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:06:39.901188    9325 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:06:39.904706    9325 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:06:39.907718    9325 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:06:39.907739    9325 cni.go:84] Creating CNI manager for "kindnet"
	I1008 11:06:39.907743    9325 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1008 11:06:39.907792    9325 start.go:340] cluster config:
	{Name:kindnet-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:06:39.912408    9325 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:06:39.920696    9325 out.go:177] * Starting "kindnet-446000" primary control-plane node in "kindnet-446000" cluster
	I1008 11:06:39.924700    9325 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:06:39.924718    9325 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:06:39.924727    9325 cache.go:56] Caching tarball of preloaded images
	I1008 11:06:39.924820    9325 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:06:39.924826    9325 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:06:39.924901    9325 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kindnet-446000/config.json ...
	I1008 11:06:39.924912    9325 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kindnet-446000/config.json: {Name:mkfe07401e2adbf6ac2b5b6c7bfac668cba369ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:06:39.925287    9325 start.go:360] acquireMachinesLock for kindnet-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:39.925338    9325 start.go:364] duration metric: took 45.291µs to acquireMachinesLock for "kindnet-446000"
	I1008 11:06:39.925349    9325 start.go:93] Provisioning new machine with config: &{Name:kindnet-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:39.925373    9325 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:39.933657    9325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:06:39.951454    9325 start.go:159] libmachine.API.Create for "kindnet-446000" (driver="qemu2")
	I1008 11:06:39.951481    9325 client.go:168] LocalClient.Create starting
	I1008 11:06:39.951556    9325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:39.951596    9325 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:39.951609    9325 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:39.951655    9325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:39.951687    9325 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:39.951696    9325 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:39.952077    9325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:40.096573    9325 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:40.161139    9325 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:40.161145    9325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:40.161358    9325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2
	I1008 11:06:40.171168    9325 main.go:141] libmachine: STDOUT: 
	I1008 11:06:40.171189    9325 main.go:141] libmachine: STDERR: 
	I1008 11:06:40.171244    9325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2 +20000M
	I1008 11:06:40.179656    9325 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:40.179671    9325 main.go:141] libmachine: STDERR: 
	I1008 11:06:40.179689    9325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2
	I1008 11:06:40.179695    9325 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:40.179708    9325 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:40.179743    9325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7e:f8:fe:de:6a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2
	I1008 11:06:40.181557    9325 main.go:141] libmachine: STDOUT: 
	I1008 11:06:40.181573    9325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:40.181590    9325 client.go:171] duration metric: took 230.106209ms to LocalClient.Create
	I1008 11:06:42.183803    9325 start.go:128] duration metric: took 2.25843775s to createHost
	I1008 11:06:42.183873    9325 start.go:83] releasing machines lock for "kindnet-446000", held for 2.258564s
	W1008 11:06:42.183925    9325 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:42.198026    9325 out.go:177] * Deleting "kindnet-446000" in qemu2 ...
	W1008 11:06:42.222002    9325 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:42.222035    9325 start.go:729] Will try again in 5 seconds ...
	I1008 11:06:47.224191    9325 start.go:360] acquireMachinesLock for kindnet-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:47.224774    9325 start.go:364] duration metric: took 485.5µs to acquireMachinesLock for "kindnet-446000"
	I1008 11:06:47.224902    9325 start.go:93] Provisioning new machine with config: &{Name:kindnet-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kindnet-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:47.225111    9325 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:47.239908    9325 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:06:47.290022    9325 start.go:159] libmachine.API.Create for "kindnet-446000" (driver="qemu2")
	I1008 11:06:47.290078    9325 client.go:168] LocalClient.Create starting
	I1008 11:06:47.290222    9325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:47.290313    9325 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:47.290328    9325 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:47.290397    9325 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:47.290454    9325 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:47.290471    9325 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:47.291046    9325 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:47.449876    9325 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:47.501739    9325 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:47.501745    9325 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:47.501939    9325 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2
	I1008 11:06:47.511791    9325 main.go:141] libmachine: STDOUT: 
	I1008 11:06:47.511816    9325 main.go:141] libmachine: STDERR: 
	I1008 11:06:47.511871    9325 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2 +20000M
	I1008 11:06:47.520258    9325 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:47.520275    9325 main.go:141] libmachine: STDERR: 
	I1008 11:06:47.520287    9325 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2
	I1008 11:06:47.520292    9325 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:47.520305    9325 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:47.520345    9325 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b2:85:10:13:b1:10 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kindnet-446000/disk.qcow2
	I1008 11:06:47.522151    9325 main.go:141] libmachine: STDOUT: 
	I1008 11:06:47.522167    9325 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:47.522181    9325 client.go:171] duration metric: took 232.102208ms to LocalClient.Create
	I1008 11:06:49.524312    9325 start.go:128] duration metric: took 2.29921075s to createHost
	I1008 11:06:49.524428    9325 start.go:83] releasing machines lock for "kindnet-446000", held for 2.299669042s
	W1008 11:06:49.524796    9325 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:49.538313    9325 out.go:201] 
	W1008 11:06:49.543493    9325 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:06:49.543518    9325 out.go:270] * 
	* 
	W1008 11:06:49.546019    9325 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:06:49.556272    9325 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.77s)
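
Note that the test never reaches kindnet itself: the qemu2 driver hands the VM off to /opt/socket_vmnet/bin/socket_vmnet_client, which dials the unix socket at /var/run/socket_vmnet, and "Connection refused" on a unix socket typically means nothing was accepting connections there, i.e. the socket_vmnet daemon was not running on the CI host. A minimal host-side triage sketch (the Homebrew service name below is an assumption; a source install would use its own launchd label):

	# confirm the daemon and its unix socket exist on the host
	ls -l /var/run/socket_vmnet     # the socket file should be present
	pgrep -fl socket_vmnet          # the daemon process should show up here
	# if socket_vmnet was installed via Homebrew (assumption), restart the service
	sudo brew services restart socket_vmnet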

TestNetworkPlugins/group/calico/Start (9.83s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.82722375s)

-- stdout --
	* [calico-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-446000" primary control-plane node in "calico-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:06:52.054535    9440 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:06:52.054710    9440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:52.054713    9440 out.go:358] Setting ErrFile to fd 2...
	I1008 11:06:52.054716    9440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:06:52.054857    9440 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:06:52.056014    9440 out.go:352] Setting JSON to false
	I1008 11:06:52.075087    9440 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5782,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:06:52.075164    9440 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:06:52.079788    9440 out.go:177] * [calico-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:06:52.086641    9440 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:06:52.086686    9440 notify.go:220] Checking for updates...
	I1008 11:06:52.093565    9440 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:06:52.096537    9440 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:06:52.099551    9440 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:06:52.102605    9440 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:06:52.105594    9440 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:06:52.108893    9440 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:52.108967    9440 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:06:52.109019    9440 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:06:52.113550    9440 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:06:52.120520    9440 start.go:297] selected driver: qemu2
	I1008 11:06:52.120528    9440 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:06:52.120534    9440 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:06:52.123058    9440 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:06:52.126490    9440 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:06:52.129672    9440 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:06:52.129699    9440 cni.go:84] Creating CNI manager for "calico"
	I1008 11:06:52.129704    9440 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1008 11:06:52.129734    9440 start.go:340] cluster config:
	{Name:calico-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:06:52.134615    9440 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:06:52.142537    9440 out.go:177] * Starting "calico-446000" primary control-plane node in "calico-446000" cluster
	I1008 11:06:52.145511    9440 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:06:52.145528    9440 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:06:52.145540    9440 cache.go:56] Caching tarball of preloaded images
	I1008 11:06:52.145630    9440 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:06:52.145636    9440 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:06:52.145710    9440 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/calico-446000/config.json ...
	I1008 11:06:52.145721    9440 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/calico-446000/config.json: {Name:mk89e8f4ddaa193bc56133f2da144a70c6c990dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:06:52.146028    9440 start.go:360] acquireMachinesLock for calico-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:52.146079    9440 start.go:364] duration metric: took 44.667µs to acquireMachinesLock for "calico-446000"
	I1008 11:06:52.146090    9440 start.go:93] Provisioning new machine with config: &{Name:calico-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:52.146124    9440 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:52.149602    9440 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:06:52.167623    9440 start.go:159] libmachine.API.Create for "calico-446000" (driver="qemu2")
	I1008 11:06:52.167666    9440 client.go:168] LocalClient.Create starting
	I1008 11:06:52.167754    9440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:52.167795    9440 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:52.167810    9440 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:52.167856    9440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:52.167892    9440 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:52.167900    9440 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:52.168352    9440 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:52.315453    9440 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:52.444832    9440 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:52.444839    9440 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:52.445047    9440 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2
	I1008 11:06:52.454990    9440 main.go:141] libmachine: STDOUT: 
	I1008 11:06:52.455004    9440 main.go:141] libmachine: STDERR: 
	I1008 11:06:52.455065    9440 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2 +20000M
	I1008 11:06:52.463515    9440 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:52.463530    9440 main.go:141] libmachine: STDERR: 
	I1008 11:06:52.463553    9440 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2
	I1008 11:06:52.463559    9440 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:52.463569    9440 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:52.463597    9440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:e5:33:5c:10:8a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2
	I1008 11:06:52.465387    9440 main.go:141] libmachine: STDOUT: 
	I1008 11:06:52.465399    9440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:52.465424    9440 client.go:171] duration metric: took 297.758042ms to LocalClient.Create
	I1008 11:06:54.467582    9440 start.go:128] duration metric: took 2.321465875s to createHost
	I1008 11:06:54.467647    9440 start.go:83] releasing machines lock for "calico-446000", held for 2.321599334s
	W1008 11:06:54.467703    9440 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:54.481589    9440 out.go:177] * Deleting "calico-446000" in qemu2 ...
	W1008 11:06:54.508525    9440 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:06:54.508556    9440 start.go:729] Will try again in 5 seconds ...
	I1008 11:06:59.510646    9440 start.go:360] acquireMachinesLock for calico-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:06:59.511161    9440 start.go:364] duration metric: took 391.958µs to acquireMachinesLock for "calico-446000"
	I1008 11:06:59.511275    9440 start.go:93] Provisioning new machine with config: &{Name:calico-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:calico-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:06:59.511553    9440 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:06:59.524333    9440 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:06:59.573018    9440 start.go:159] libmachine.API.Create for "calico-446000" (driver="qemu2")
	I1008 11:06:59.573073    9440 client.go:168] LocalClient.Create starting
	I1008 11:06:59.573187    9440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:06:59.573262    9440 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:59.573282    9440 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:59.573341    9440 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:06:59.573406    9440 main.go:141] libmachine: Decoding PEM data...
	I1008 11:06:59.573420    9440 main.go:141] libmachine: Parsing certificate...
	I1008 11:06:59.574063    9440 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:06:59.732411    9440 main.go:141] libmachine: Creating SSH key...
	I1008 11:06:59.783490    9440 main.go:141] libmachine: Creating Disk image...
	I1008 11:06:59.783495    9440 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:06:59.783696    9440 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2
	I1008 11:06:59.793540    9440 main.go:141] libmachine: STDOUT: 
	I1008 11:06:59.793563    9440 main.go:141] libmachine: STDERR: 
	I1008 11:06:59.793617    9440 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2 +20000M
	I1008 11:06:59.802148    9440 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:06:59.802162    9440 main.go:141] libmachine: STDERR: 
	I1008 11:06:59.802179    9440 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2
	I1008 11:06:59.802185    9440 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:06:59.802195    9440 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:06:59.802223    9440 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:19:12:7b:f9:34 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2
	I1008 11:06:59.804049    9440 main.go:141] libmachine: STDOUT: 
	I1008 11:06:59.804064    9440 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:06:59.804075    9440 client.go:171] duration metric: took 230.999792ms to LocalClient.Create
	I1008 11:07:01.806217    9440 start.go:128] duration metric: took 2.294671542s to createHost
	I1008 11:07:01.806282    9440 start.go:83] releasing machines lock for "calico-446000", held for 2.295132875s
	W1008 11:07:01.806646    9440 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p calico-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:01.819280    9440 out.go:201] 
	W1008 11:07:01.823354    9440 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:07:01.823380    9440 out.go:270] * 
	* 
	W1008 11:07:01.826897    9440 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:07:01.834255    9440 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.83s)
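
The calico run fails identically to the kindnet run: qemu-img creates and resizes the disk image fine, and the launch dies at the socket_vmnet handoff, before the CNI under test matters. To confirm the VM image itself boots, one could rerun the logged qemu command by hand with qemu's built-in user-mode networking substituted for socket_vmnet. This is a hypothetical manual repro, not part of the harness: the socket_vmnet_client wrapper and the QMP/pidfile/daemonize plumbing are dropped, -netdev socket is swapped for -netdev user, and -serial stdio is added (not in the original command) so the guest console is visible:

	qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf -m 3072 -smp 2 \
	  -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash \
	  -display none -serial stdio -boot d \
	  -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/boot2docker.iso \
	  -netdev user,id=net0 -device virtio-net-pci,netdev=net0 \
	  /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/calico-446000/disk.qcow2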

TestNetworkPlugins/group/custom-flannel/Start (9.86s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.862629459s)

-- stdout --
	* [custom-flannel-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-446000" primary control-plane node in "custom-flannel-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:07:04.460139    9557 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:07:04.460308    9557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:04.460312    9557 out.go:358] Setting ErrFile to fd 2...
	I1008 11:07:04.460314    9557 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:04.460460    9557 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:07:04.461628    9557 out.go:352] Setting JSON to false
	I1008 11:07:04.480614    9557 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5794,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:07:04.480719    9557 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:07:04.486619    9557 out.go:177] * [custom-flannel-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:07:04.493490    9557 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:07:04.493550    9557 notify.go:220] Checking for updates...
	I1008 11:07:04.500632    9557 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:07:04.502097    9557 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:07:04.505586    9557 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:07:04.508638    9557 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:07:04.511617    9557 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:07:04.514998    9557 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:04.515078    9557 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:04.515135    9557 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:07:04.519610    9557 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:07:04.526599    9557 start.go:297] selected driver: qemu2
	I1008 11:07:04.526610    9557 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:07:04.526616    9557 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:07:04.529150    9557 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:07:04.532606    9557 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:07:04.535710    9557 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:07:04.535739    9557 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1008 11:07:04.535749    9557 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1008 11:07:04.535786    9557 start.go:340] cluster config:
	{Name:custom-flannel-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:07:04.540770    9557 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:07:04.548644    9557 out.go:177] * Starting "custom-flannel-446000" primary control-plane node in "custom-flannel-446000" cluster
	I1008 11:07:04.552563    9557 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:07:04.552590    9557 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:07:04.552601    9557 cache.go:56] Caching tarball of preloaded images
	I1008 11:07:04.552698    9557 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:07:04.552704    9557 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:07:04.552776    9557 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/custom-flannel-446000/config.json ...
	I1008 11:07:04.552787    9557 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/custom-flannel-446000/config.json: {Name:mk491c8141444e64127427204a9a57a90a5b7d0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:07:04.553171    9557 start.go:360] acquireMachinesLock for custom-flannel-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:04.553227    9557 start.go:364] duration metric: took 50.208µs to acquireMachinesLock for "custom-flannel-446000"
	I1008 11:07:04.553239    9557 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:04.553286    9557 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:04.557671    9557 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:04.575300    9557 start.go:159] libmachine.API.Create for "custom-flannel-446000" (driver="qemu2")
	I1008 11:07:04.575327    9557 client.go:168] LocalClient.Create starting
	I1008 11:07:04.575400    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:04.575439    9557 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:04.575452    9557 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:04.575498    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:04.575528    9557 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:04.575537    9557 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:04.576029    9557 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:04.720170    9557 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:04.774893    9557 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:04.774906    9557 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:04.775106    9557 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2
	I1008 11:07:04.785002    9557 main.go:141] libmachine: STDOUT: 
	I1008 11:07:04.785020    9557 main.go:141] libmachine: STDERR: 
	I1008 11:07:04.785081    9557 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2 +20000M
	I1008 11:07:04.793738    9557 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:04.793763    9557 main.go:141] libmachine: STDERR: 
	I1008 11:07:04.793782    9557 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2
	I1008 11:07:04.793787    9557 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:04.793799    9557 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:04.793827    9557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:df:1c:ac:dd:e7 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2
	I1008 11:07:04.795602    9557 main.go:141] libmachine: STDOUT: 
	I1008 11:07:04.795615    9557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:04.795633    9557 client.go:171] duration metric: took 220.302584ms to LocalClient.Create
	I1008 11:07:06.797816    9557 start.go:128] duration metric: took 2.244532042s to createHost
	I1008 11:07:06.797905    9557 start.go:83] releasing machines lock for "custom-flannel-446000", held for 2.244706125s
	W1008 11:07:06.798007    9557 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:06.813337    9557 out.go:177] * Deleting "custom-flannel-446000" in qemu2 ...
	W1008 11:07:06.838943    9557 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:06.838972    9557 start.go:729] Will try again in 5 seconds ...
	I1008 11:07:11.841107    9557 start.go:360] acquireMachinesLock for custom-flannel-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:11.841835    9557 start.go:364] duration metric: took 584.875µs to acquireMachinesLock for "custom-flannel-446000"
	I1008 11:07:11.841971    9557 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:custom-flannel-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:11.842200    9557 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:11.847709    9557 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:11.898479    9557 start.go:159] libmachine.API.Create for "custom-flannel-446000" (driver="qemu2")
	I1008 11:07:11.898534    9557 client.go:168] LocalClient.Create starting
	I1008 11:07:11.898682    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:11.898772    9557 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:11.898791    9557 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:11.898862    9557 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:11.898934    9557 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:11.898948    9557 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:11.899591    9557 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:12.059212    9557 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:12.224908    9557 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:12.224915    9557 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:12.225128    9557 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2
	I1008 11:07:12.235456    9557 main.go:141] libmachine: STDOUT: 
	I1008 11:07:12.235469    9557 main.go:141] libmachine: STDERR: 
	I1008 11:07:12.235544    9557 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2 +20000M
	I1008 11:07:12.243926    9557 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:12.243940    9557 main.go:141] libmachine: STDERR: 
	I1008 11:07:12.243954    9557 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2
	I1008 11:07:12.243963    9557 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:12.243976    9557 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:12.244002    9557 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=3a:78:c3:d8:42:5d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/custom-flannel-446000/disk.qcow2
	I1008 11:07:12.245875    9557 main.go:141] libmachine: STDOUT: 
	I1008 11:07:12.245888    9557 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:12.245900    9557 client.go:171] duration metric: took 347.36625ms to LocalClient.Create
	I1008 11:07:14.248038    9557 start.go:128] duration metric: took 2.405847s to createHost
	I1008 11:07:14.248100    9557 start.go:83] releasing machines lock for "custom-flannel-446000", held for 2.406274291s
	W1008 11:07:14.248483    9557 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:14.261137    9557 out.go:201] 
	W1008 11:07:14.264171    9557 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:07:14.264237    9557 out.go:270] * 
	* 
	W1008 11:07:14.267445    9557 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:07:14.277049    9557 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.86s)
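
Every failure in this group reduces to the same root cause: disk image creation succeeds (qemu-img convert to qcow2, then resize +20000M), but the VM launch is aborted because /opt/socket_vmnet/bin/socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet, which suggests the socket_vmnet daemon was not running on the build host. A minimal Go sketch of that precondition, assuming only the socket path shown in the logs (the program and helper name are hypothetical, not minikube code):

package main

import (
	"fmt"
	"net"
	"time"
)

// checkSocketVMnet dials the unix socket that socket_vmnet_client needs
// before QEMU can be launched with -netdev socket,id=net0,fd=3. A refused
// dial here reproduces the condition logged above as:
//   Failed to connect to "/var/run/socket_vmnet": Connection refused
func checkSocketVMnet(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return fmt.Errorf("socket_vmnet not reachable at %s: %w", path, err)
	}
	return conn.Close()
}

func main() {
	if err := checkSocketVMnet("/var/run/socket_vmnet"); err != nil {
		fmt.Println(err) // expected on this host while the daemon is down
	}
}

While that dial fails, each Start test in this group aborts in roughly the same ~10 seconds recorded here: two create attempts, each refused at the same socket, with a 5-second wait in between.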

TestNetworkPlugins/group/false/Start (9.95s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.951023084s)

-- stdout --
	* [false-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-446000" primary control-plane node in "false-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:07:16.860659    9674 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:07:16.860798    9674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:16.860801    9674 out.go:358] Setting ErrFile to fd 2...
	I1008 11:07:16.860804    9674 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:16.860955    9674 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:07:16.862133    9674 out.go:352] Setting JSON to false
	I1008 11:07:16.880012    9674 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5806,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:07:16.880092    9674 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:07:16.885537    9674 out.go:177] * [false-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:07:16.891514    9674 notify.go:220] Checking for updates...
	I1008 11:07:16.895321    9674 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:07:16.898461    9674 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:07:16.902492    9674 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:07:16.905416    9674 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:07:16.908473    9674 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:07:16.911476    9674 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:07:16.913281    9674 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:16.913358    9674 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:16.913408    9674 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:07:16.917461    9674 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:07:16.924346    9674 start.go:297] selected driver: qemu2
	I1008 11:07:16.924356    9674 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:07:16.924363    9674 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:07:16.926875    9674 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:07:16.930472    9674 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:07:16.933599    9674 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:07:16.933625    9674 cni.go:84] Creating CNI manager for "false"
	I1008 11:07:16.933658    9674 start.go:340] cluster config:
	{Name:false-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:07:16.938288    9674 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:07:16.946461    9674 out.go:177] * Starting "false-446000" primary control-plane node in "false-446000" cluster
	I1008 11:07:16.950580    9674 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:07:16.950595    9674 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:07:16.950611    9674 cache.go:56] Caching tarball of preloaded images
	I1008 11:07:16.950687    9674 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:07:16.950693    9674 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:07:16.950762    9674 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/false-446000/config.json ...
	I1008 11:07:16.950772    9674 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/false-446000/config.json: {Name:mk0886ed6c839e2880b06d4ce0be3fe5418f85f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:07:16.951108    9674 start.go:360] acquireMachinesLock for false-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:16.951155    9674 start.go:364] duration metric: took 41.5µs to acquireMachinesLock for "false-446000"
	I1008 11:07:16.951168    9674 start.go:93] Provisioning new machine with config: &{Name:false-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:16.951194    9674 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:16.955478    9674 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:16.972432    9674 start.go:159] libmachine.API.Create for "false-446000" (driver="qemu2")
	I1008 11:07:16.972463    9674 client.go:168] LocalClient.Create starting
	I1008 11:07:16.972529    9674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:16.972565    9674 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:16.972578    9674 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:16.972617    9674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:16.972650    9674 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:16.972659    9674 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:16.973097    9674 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:17.118190    9674 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:17.147041    9674 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:17.147046    9674 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:17.147236    9674 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2
	I1008 11:07:17.157040    9674 main.go:141] libmachine: STDOUT: 
	I1008 11:07:17.157064    9674 main.go:141] libmachine: STDERR: 
	I1008 11:07:17.157120    9674 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2 +20000M
	I1008 11:07:17.165445    9674 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:17.165457    9674 main.go:141] libmachine: STDERR: 
	I1008 11:07:17.165482    9674 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2
	I1008 11:07:17.165486    9674 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:17.165498    9674 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:17.165521    9674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:42:25:f3:ef:d8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2
	I1008 11:07:17.167294    9674 main.go:141] libmachine: STDOUT: 
	I1008 11:07:17.167307    9674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:17.167328    9674 client.go:171] duration metric: took 194.861958ms to LocalClient.Create
	I1008 11:07:19.167968    9674 start.go:128] duration metric: took 2.216785084s to createHost
	I1008 11:07:19.168048    9674 start.go:83] releasing machines lock for "false-446000", held for 2.216921375s
	W1008 11:07:19.168105    9674 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:19.181074    9674 out.go:177] * Deleting "false-446000" in qemu2 ...
	W1008 11:07:19.205291    9674 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:19.205319    9674 start.go:729] Will try again in 5 seconds ...
	I1008 11:07:24.207474    9674 start.go:360] acquireMachinesLock for false-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:24.207952    9674 start.go:364] duration metric: took 388.542µs to acquireMachinesLock for "false-446000"
	I1008 11:07:24.208058    9674 start.go:93] Provisioning new machine with config: &{Name:false-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:false-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:24.208303    9674 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:24.221055    9674 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:24.271344    9674 start.go:159] libmachine.API.Create for "false-446000" (driver="qemu2")
	I1008 11:07:24.271399    9674 client.go:168] LocalClient.Create starting
	I1008 11:07:24.271575    9674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:24.271679    9674 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:24.271700    9674 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:24.271780    9674 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:24.271840    9674 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:24.271854    9674 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:24.272512    9674 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:24.430487    9674 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:24.710571    9674 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:24.710582    9674 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:24.710821    9674 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2
	I1008 11:07:24.721292    9674 main.go:141] libmachine: STDOUT: 
	I1008 11:07:24.721313    9674 main.go:141] libmachine: STDERR: 
	I1008 11:07:24.721375    9674 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2 +20000M
	I1008 11:07:24.730043    9674 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:24.730063    9674 main.go:141] libmachine: STDERR: 
	I1008 11:07:24.730076    9674 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2
	I1008 11:07:24.730081    9674 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:24.730088    9674 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:24.730126    9674 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:e7:92:33:58:08 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/false-446000/disk.qcow2
	I1008 11:07:24.731921    9674 main.go:141] libmachine: STDOUT: 
	I1008 11:07:24.731937    9674 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:24.731951    9674 client.go:171] duration metric: took 460.553917ms to LocalClient.Create
	I1008 11:07:26.734162    9674 start.go:128] duration metric: took 2.525870959s to createHost
	I1008 11:07:26.734257    9674 start.go:83] releasing machines lock for "false-446000", held for 2.526325s
	W1008 11:07:26.734639    9674 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p false-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:26.746289    9674 out.go:201] 
	W1008 11:07:26.750407    9674 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:07:26.750440    9674 out.go:270] * 
	* 
	W1008 11:07:26.752852    9674 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:07:26.763274    9674 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.95s)
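
As with the previous profiles, the control flow is identical: LocalClient.Create fails while starting QEMU, minikube deletes the half-created machine, waits five seconds (start.go:729), retries once, and converts the second failure into the GUEST_PROVISION error behind exit status 80. A rough sketch of that retry shape, with hypothetical createHost/deleteHost stand-ins rather than the real libmachine calls:

package main

import (
	"errors"
	"fmt"
	"time"
)

// createHost is a stand-in: in these logs this step fails while QEMU is
// being launched through socket_vmnet_client.
func createHost(name string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func deleteHost(name string) {
	fmt.Printf("* Deleting %q in qemu2 ...\n", name)
}

func startWithRetry(name string) error {
	for attempt := 0; attempt < 2; attempt++ {
		if err := createHost(name); err == nil {
			return nil
		} else if attempt == 0 {
			deleteHost(name)
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		}
	}
	return errors.New("GUEST_PROVISION: error provisioning guest")
}

func main() {
	if err := startWithRetry("false-446000"); err != nil {
		fmt.Println("X Exiting due to", err) // exit status 80 in the real binary
	}
}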

TestNetworkPlugins/group/enable-default-cni/Start (10.02s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (10.021927417s)
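
This variant additionally exercises the deprecated --enable-default-cni flag; in the stderr below, minikube rewrites it to --cni=bridge (start_flags.go:464) before the run fails on the same socket_vmnet error. A hypothetical illustration of that mapping (function and parameter names are assumptions, not the minikube source):

package main

import "fmt"

// resolveCNI loosely mirrors the deprecation mapping logged at
// start_flags.go:464: --enable-default-cni is rewritten to --cni=bridge
// before the cluster config is generated.
func resolveCNI(enableDefaultCNI bool, cni string) string {
	if enableDefaultCNI && cni == "" {
		fmt.Println("Found deprecated --enable-default-cni flag, setting --cni=bridge")
		return "bridge"
	}
	return cni
}

func main() {
	fmt.Println("CNI:", resolveCNI(true, "")) // prints: CNI: bridge
}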

-- stdout --
	* [enable-default-cni-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-446000" primary control-plane node in "enable-default-cni-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:07:29.085832    9783 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:07:29.085997    9783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:29.086000    9783 out.go:358] Setting ErrFile to fd 2...
	I1008 11:07:29.086003    9783 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:29.086135    9783 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:07:29.087280    9783 out.go:352] Setting JSON to false
	I1008 11:07:29.105017    9783 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5819,"bootTime":1728405030,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:07:29.105083    9783 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:07:29.111235    9783 out.go:177] * [enable-default-cni-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:07:29.117159    9783 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:07:29.117207    9783 notify.go:220] Checking for updates...
	I1008 11:07:29.124124    9783 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:07:29.127151    9783 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:07:29.131097    9783 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:07:29.134097    9783 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:07:29.137182    9783 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:07:29.140517    9783 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:29.140596    9783 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:29.140656    9783 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:07:29.145104    9783 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:07:29.152178    9783 start.go:297] selected driver: qemu2
	I1008 11:07:29.152186    9783 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:07:29.152200    9783 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:07:29.154670    9783 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:07:29.158049    9783 out.go:177] * Automatically selected the socket_vmnet network
	E1008 11:07:29.161196    9783 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1008 11:07:29.161209    9783 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:07:29.161236    9783 cni.go:84] Creating CNI manager for "bridge"
	I1008 11:07:29.161243    9783 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:07:29.161277    9783 start.go:340] cluster config:
	{Name:enable-default-cni-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:07:29.166133    9783 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:07:29.173143    9783 out.go:177] * Starting "enable-default-cni-446000" primary control-plane node in "enable-default-cni-446000" cluster
	I1008 11:07:29.177110    9783 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:07:29.177126    9783 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:07:29.177139    9783 cache.go:56] Caching tarball of preloaded images
	I1008 11:07:29.177231    9783 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:07:29.177237    9783 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:07:29.177308    9783 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/enable-default-cni-446000/config.json ...
	I1008 11:07:29.177319    9783 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/enable-default-cni-446000/config.json: {Name:mk2ac4c6f87832c07cd31cb0fe95df2e791df3da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:07:29.177682    9783 start.go:360] acquireMachinesLock for enable-default-cni-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:29.177735    9783 start.go:364] duration metric: took 45.083µs to acquireMachinesLock for "enable-default-cni-446000"
	I1008 11:07:29.177746    9783 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:29.177777    9783 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:29.182155    9783 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:29.200147    9783 start.go:159] libmachine.API.Create for "enable-default-cni-446000" (driver="qemu2")
	I1008 11:07:29.200184    9783 client.go:168] LocalClient.Create starting
	I1008 11:07:29.200258    9783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:29.200304    9783 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:29.200315    9783 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:29.200367    9783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:29.200398    9783 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:29.200406    9783 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:29.200834    9783 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:29.345160    9783 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:29.631318    9783 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:29.631332    9783 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:29.631582    9783 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2
	I1008 11:07:29.642059    9783 main.go:141] libmachine: STDOUT: 
	I1008 11:07:29.642081    9783 main.go:141] libmachine: STDERR: 
	I1008 11:07:29.642152    9783 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2 +20000M
	I1008 11:07:29.650705    9783 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:29.650730    9783 main.go:141] libmachine: STDERR: 
	I1008 11:07:29.650747    9783 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2
	I1008 11:07:29.650753    9783 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:29.650764    9783 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:29.650792    9783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:e5:95:d7:9d:09 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2
	I1008 11:07:29.652669    9783 main.go:141] libmachine: STDOUT: 
	I1008 11:07:29.652682    9783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:29.652701    9783 client.go:171] duration metric: took 452.520333ms to LocalClient.Create
	I1008 11:07:31.653880    9783 start.go:128] duration metric: took 2.476106166s to createHost
	I1008 11:07:31.653949    9783 start.go:83] releasing machines lock for "enable-default-cni-446000", held for 2.476247625s
	W1008 11:07:31.654057    9783 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:31.664144    9783 out.go:177] * Deleting "enable-default-cni-446000" in qemu2 ...
	W1008 11:07:31.694151    9783 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:31.694174    9783 start.go:729] Will try again in 5 seconds ...
	I1008 11:07:36.696317    9783 start.go:360] acquireMachinesLock for enable-default-cni-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:36.696792    9783 start.go:364] duration metric: took 404.459µs to acquireMachinesLock for "enable-default-cni-446000"
	I1008 11:07:36.696890    9783 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:enable-default-cni-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:36.697139    9783 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:36.706801    9783 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:36.756052    9783 start.go:159] libmachine.API.Create for "enable-default-cni-446000" (driver="qemu2")
	I1008 11:07:36.756108    9783 client.go:168] LocalClient.Create starting
	I1008 11:07:36.756251    9783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:36.756343    9783 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:36.756359    9783 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:36.756423    9783 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:36.756480    9783 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:36.756497    9783 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:36.757162    9783 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:36.914912    9783 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:37.008734    9783 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:37.008740    9783 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:37.008930    9783 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2
	I1008 11:07:37.018994    9783 main.go:141] libmachine: STDOUT: 
	I1008 11:07:37.019014    9783 main.go:141] libmachine: STDERR: 
	I1008 11:07:37.019076    9783 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2 +20000M
	I1008 11:07:37.027572    9783 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:37.027590    9783 main.go:141] libmachine: STDERR: 
	I1008 11:07:37.027602    9783 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2
	I1008 11:07:37.027607    9783 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:37.027627    9783 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:37.027653    9783 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:90:d3:30:34:61 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/enable-default-cni-446000/disk.qcow2
	I1008 11:07:37.029412    9783 main.go:141] libmachine: STDOUT: 
	I1008 11:07:37.029452    9783 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:37.029464    9783 client.go:171] duration metric: took 273.355541ms to LocalClient.Create
	I1008 11:07:39.031601    9783 start.go:128] duration metric: took 2.334446541s to createHost
	I1008 11:07:39.031653    9783 start.go:83] releasing machines lock for "enable-default-cni-446000", held for 2.334875541s
	W1008 11:07:39.032059    9783 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:39.044662    9783 out.go:201] 
	W1008 11:07:39.048647    9783 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:07:39.048689    9783 out.go:270] * 
	* 
	W1008 11:07:39.056827    9783 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:07:39.062966    9783 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (10.02s)
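Note: every start in this group dies the same way. qemu-img creates and resizes the disk successfully, but launching the VM through socket_vmnet_client fails with Failed to connect to "/var/run/socket_vmnet": Connection refused. That points at the socket_vmnet daemon on the host, not at QEMU or the test itself. A minimal host-side check might look like the following sketch, assuming the from-source install layout under /opt/socket_vmnet that this job uses; the 192.168.105.1 gateway address is an illustrative default, not taken from this log:

	# Is anything listening on the Unix socket the client is dialing?
	ls -l /var/run/socket_vmnet
	sudo lsof -U | grep socket_vmnet

	# If not, run the daemon in the foreground to see why it exits
	# (vmnet requires root).
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet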

TestNetworkPlugins/group/flannel/Start (9.78s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.776195916s)

-- stdout --
	* [flannel-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-446000" primary control-plane node in "flannel-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:07:41.366171    9895 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:07:41.366319    9895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:41.366322    9895 out.go:358] Setting ErrFile to fd 2...
	I1008 11:07:41.366325    9895 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:41.366461    9895 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:07:41.367615    9895 out.go:352] Setting JSON to false
	I1008 11:07:41.385680    9895 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5831,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:07:41.385749    9895 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:07:41.391752    9895 out.go:177] * [flannel-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:07:41.398565    9895 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:07:41.398667    9895 notify.go:220] Checking for updates...
	I1008 11:07:41.405700    9895 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:07:41.407095    9895 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:07:41.409671    9895 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:07:41.412663    9895 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:07:41.415756    9895 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:07:41.419157    9895 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:41.419233    9895 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:41.419281    9895 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:07:41.423716    9895 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:07:41.430653    9895 start.go:297] selected driver: qemu2
	I1008 11:07:41.430662    9895 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:07:41.430668    9895 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:07:41.433219    9895 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:07:41.436681    9895 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:07:41.439816    9895 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:07:41.439844    9895 cni.go:84] Creating CNI manager for "flannel"
	I1008 11:07:41.439853    9895 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I1008 11:07:41.439888    9895 start.go:340] cluster config:
	{Name:flannel-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:07:41.444595    9895 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:07:41.452737    9895 out.go:177] * Starting "flannel-446000" primary control-plane node in "flannel-446000" cluster
	I1008 11:07:41.456716    9895 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:07:41.456742    9895 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:07:41.456751    9895 cache.go:56] Caching tarball of preloaded images
	I1008 11:07:41.456834    9895 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:07:41.456840    9895 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:07:41.456907    9895 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/flannel-446000/config.json ...
	I1008 11:07:41.456917    9895 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/flannel-446000/config.json: {Name:mk26324e2d7a0a795fdbce2ea6ac04fad21cdf86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:07:41.457251    9895 start.go:360] acquireMachinesLock for flannel-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:41.457301    9895 start.go:364] duration metric: took 41.833µs to acquireMachinesLock for "flannel-446000"
	I1008 11:07:41.457312    9895 start.go:93] Provisioning new machine with config: &{Name:flannel-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:41.457342    9895 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:41.460757    9895 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:41.478096    9895 start.go:159] libmachine.API.Create for "flannel-446000" (driver="qemu2")
	I1008 11:07:41.478121    9895 client.go:168] LocalClient.Create starting
	I1008 11:07:41.478197    9895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:41.478233    9895 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:41.478247    9895 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:41.478297    9895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:41.478326    9895 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:41.478337    9895 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:41.478738    9895 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:41.623556    9895 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:41.676044    9895 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:41.676050    9895 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:41.676252    9895 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2
	I1008 11:07:41.686208    9895 main.go:141] libmachine: STDOUT: 
	I1008 11:07:41.686231    9895 main.go:141] libmachine: STDERR: 
	I1008 11:07:41.686283    9895 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2 +20000M
	I1008 11:07:41.695055    9895 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:41.695070    9895 main.go:141] libmachine: STDERR: 
	I1008 11:07:41.695087    9895 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2
	I1008 11:07:41.695093    9895 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:41.695105    9895 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:41.695135    9895 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4e:7d:31:29:f5:9d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2
	I1008 11:07:41.696972    9895 main.go:141] libmachine: STDOUT: 
	I1008 11:07:41.696991    9895 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:41.697010    9895 client.go:171] duration metric: took 218.885708ms to LocalClient.Create
	I1008 11:07:43.699163    9895 start.go:128] duration metric: took 2.241842541s to createHost
	I1008 11:07:43.699233    9895 start.go:83] releasing machines lock for "flannel-446000", held for 2.241961125s
	W1008 11:07:43.699293    9895 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:43.708446    9895 out.go:177] * Deleting "flannel-446000" in qemu2 ...
	W1008 11:07:43.739001    9895 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:43.739029    9895 start.go:729] Will try again in 5 seconds ...
	I1008 11:07:48.741205    9895 start.go:360] acquireMachinesLock for flannel-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:48.741680    9895 start.go:364] duration metric: took 395.5µs to acquireMachinesLock for "flannel-446000"
	I1008 11:07:48.741806    9895 start.go:93] Provisioning new machine with config: &{Name:flannel-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:flannel-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:48.742095    9895 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:48.748787    9895 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:48.798405    9895 start.go:159] libmachine.API.Create for "flannel-446000" (driver="qemu2")
	I1008 11:07:48.798459    9895 client.go:168] LocalClient.Create starting
	I1008 11:07:48.798618    9895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:48.798693    9895 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:48.798710    9895 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:48.798787    9895 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:48.798842    9895 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:48.798857    9895 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:48.799605    9895 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:48.956476    9895 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:49.042853    9895 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:49.042858    9895 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:49.043065    9895 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2
	I1008 11:07:49.053147    9895 main.go:141] libmachine: STDOUT: 
	I1008 11:07:49.053163    9895 main.go:141] libmachine: STDERR: 
	I1008 11:07:49.053237    9895 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2 +20000M
	I1008 11:07:49.061758    9895 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:49.061775    9895 main.go:141] libmachine: STDERR: 
	I1008 11:07:49.061787    9895 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2
	I1008 11:07:49.061792    9895 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:49.061804    9895 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:49.061850    9895 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ee:e6:c1:e2:39:6b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/flannel-446000/disk.qcow2
	I1008 11:07:49.063728    9895 main.go:141] libmachine: STDOUT: 
	I1008 11:07:49.063743    9895 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:49.063756    9895 client.go:171] duration metric: took 265.296625ms to LocalClient.Create
	I1008 11:07:51.065893    9895 start.go:128] duration metric: took 2.323811709s to createHost
	I1008 11:07:51.066000    9895 start.go:83] releasing machines lock for "flannel-446000", held for 2.324303s
	W1008 11:07:51.066371    9895 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p flannel-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:51.078980    9895 out.go:201] 
	W1008 11:07:51.083083    9895 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:07:51.083113    9895 out.go:270] * 
	* 
	W1008 11:07:51.085699    9895 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:07:51.094951    9895 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.78s)
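Note: the flannel run shows the same two-attempt pattern: StartHost fails, minikube deletes the profile, waits five seconds ("Will try again in 5 seconds ..."), and the retry fails identically, so this is a persistent environment fault rather than flakiness. The failure can likely be reproduced without minikube by invoking the client wrapper directly, using the same invocation shape the log shows (client binary, then socket path, then the command to wrap); `true` here is just a harmless placeholder command:

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true

If the daemon is down, this should print the same Connection refused error immediately.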

TestNetworkPlugins/group/bridge/Start (9.83s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.830594458s)

-- stdout --
	* [bridge-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-446000" primary control-plane node in "bridge-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:07:53.617946   10012 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:07:53.618090   10012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:53.618093   10012 out.go:358] Setting ErrFile to fd 2...
	I1008 11:07:53.618096   10012 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:07:53.618230   10012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:07:53.619384   10012 out.go:352] Setting JSON to false
	I1008 11:07:53.637453   10012 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5843,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:07:53.637543   10012 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:07:53.642218   10012 out.go:177] * [bridge-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:07:53.649217   10012 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:07:53.649252   10012 notify.go:220] Checking for updates...
	I1008 11:07:53.656155   10012 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:07:53.659192   10012 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:07:53.662175   10012 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:07:53.665303   10012 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:07:53.668173   10012 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:07:53.671507   10012 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:53.671584   10012 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:07:53.671638   10012 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:07:53.676156   10012 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:07:53.682187   10012 start.go:297] selected driver: qemu2
	I1008 11:07:53.682194   10012 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:07:53.682202   10012 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:07:53.684688   10012 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:07:53.688135   10012 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:07:53.691243   10012 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:07:53.691276   10012 cni.go:84] Creating CNI manager for "bridge"
	I1008 11:07:53.691280   10012 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:07:53.691316   10012 start.go:340] cluster config:
	{Name:bridge-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:07:53.695889   10012 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:07:53.704168   10012 out.go:177] * Starting "bridge-446000" primary control-plane node in "bridge-446000" cluster
	I1008 11:07:53.708167   10012 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:07:53.708183   10012 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:07:53.708194   10012 cache.go:56] Caching tarball of preloaded images
	I1008 11:07:53.708271   10012 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:07:53.708277   10012 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:07:53.708355   10012 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/bridge-446000/config.json ...
	I1008 11:07:53.708367   10012 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/bridge-446000/config.json: {Name:mk2722a1526237234af347097baf497d8ec4f190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:07:53.708748   10012 start.go:360] acquireMachinesLock for bridge-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:07:53.708800   10012 start.go:364] duration metric: took 45.834µs to acquireMachinesLock for "bridge-446000"
	I1008 11:07:53.708813   10012 start.go:93] Provisioning new machine with config: &{Name:bridge-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:07:53.708850   10012 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:07:53.713178   10012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:07:53.731132   10012 start.go:159] libmachine.API.Create for "bridge-446000" (driver="qemu2")
	I1008 11:07:53.731155   10012 client.go:168] LocalClient.Create starting
	I1008 11:07:53.731221   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:07:53.731258   10012 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:53.731272   10012 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:53.731313   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:07:53.731344   10012 main.go:141] libmachine: Decoding PEM data...
	I1008 11:07:53.731356   10012 main.go:141] libmachine: Parsing certificate...
	I1008 11:07:53.731840   10012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:07:53.877349   10012 main.go:141] libmachine: Creating SSH key...
	I1008 11:07:53.963197   10012 main.go:141] libmachine: Creating Disk image...
	I1008 11:07:53.963204   10012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:07:53.963393   10012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2
	I1008 11:07:53.973074   10012 main.go:141] libmachine: STDOUT: 
	I1008 11:07:53.973095   10012 main.go:141] libmachine: STDERR: 
	I1008 11:07:53.973166   10012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2 +20000M
	I1008 11:07:53.981575   10012 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:07:53.981591   10012 main.go:141] libmachine: STDERR: 
	I1008 11:07:53.981605   10012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2
	I1008 11:07:53.981614   10012 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:07:53.981630   10012 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:07:53.981661   10012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=1a:c8:d5:ed:44:4e -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2
	I1008 11:07:53.983475   10012 main.go:141] libmachine: STDOUT: 
	I1008 11:07:53.983489   10012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:07:53.983508   10012 client.go:171] duration metric: took 252.351125ms to LocalClient.Create
	I1008 11:07:55.985714   10012 start.go:128] duration metric: took 2.2768705s to createHost
	I1008 11:07:55.985796   10012 start.go:83] releasing machines lock for "bridge-446000", held for 2.277024084s
	W1008 11:07:55.985849   10012 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:55.998927   10012 out.go:177] * Deleting "bridge-446000" in qemu2 ...
	W1008 11:07:56.025474   10012 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:07:56.025524   10012 start.go:729] Will try again in 5 seconds ...
	I1008 11:08:01.025654   10012 start.go:360] acquireMachinesLock for bridge-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:01.026220   10012 start.go:364] duration metric: took 463.666µs to acquireMachinesLock for "bridge-446000"
	I1008 11:08:01.026325   10012 start.go:93] Provisioning new machine with config: &{Name:bridge-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:bridge-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:01.026624   10012 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:01.037845   10012 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:08:01.083779   10012 start.go:159] libmachine.API.Create for "bridge-446000" (driver="qemu2")
	I1008 11:08:01.083849   10012 client.go:168] LocalClient.Create starting
	I1008 11:08:01.084034   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:01.084127   10012 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:01.084147   10012 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:01.084224   10012 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:01.084280   10012 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:01.084306   10012 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:01.084922   10012 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:01.243883   10012 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:01.349622   10012 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:01.349627   10012 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:01.349826   10012 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2
	I1008 11:08:01.359857   10012 main.go:141] libmachine: STDOUT: 
	I1008 11:08:01.359876   10012 main.go:141] libmachine: STDERR: 
	I1008 11:08:01.359926   10012 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2 +20000M
	I1008 11:08:01.368363   10012 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:01.368379   10012 main.go:141] libmachine: STDERR: 
	I1008 11:08:01.368391   10012 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2
	I1008 11:08:01.368396   10012 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:01.368404   10012 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:01.368441   10012 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=72:d6:8c:c7:c4:73 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/bridge-446000/disk.qcow2
	I1008 11:08:01.370244   10012 main.go:141] libmachine: STDOUT: 
	I1008 11:08:01.370258   10012 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:01.370270   10012 client.go:171] duration metric: took 286.40875ms to LocalClient.Create
	I1008 11:08:03.372404   10012 start.go:128] duration metric: took 2.345793584s to createHost
	I1008 11:08:03.372452   10012 start.go:83] releasing machines lock for "bridge-446000", held for 2.346244917s
	W1008 11:08:03.372832   10012 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p bridge-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:03.385435   10012 out.go:201] 
	W1008 11:08:03.389510   10012 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:03.389536   10012 out.go:270] * 
	* 
	W1008 11:08:03.392023   10012 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:08:03.402446   10012 out.go:201] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.83s)
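Note: if socket_vmnet on this agent is managed by launchd, restarting the service is usually enough once the daemon has wedged or its socket file has gone stale. A sketch, assuming the upstream socket_vmnet project's launchd label io.github.lima-vm.socket_vmnet (the label actually used on this Jenkins agent may differ):

	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet
	ls -l /var/run/socket_vmnet   # the socket should reappear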

TestNetworkPlugins/group/kubenet/Start (9.88s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-446000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (9.877345875s)

-- stdout --
	* [kubenet-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-446000" primary control-plane node in "kubenet-446000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-446000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:08:05.789847   10124 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:05.790018   10124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:05.790021   10124 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:05.790023   10124 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:05.790131   10124 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:05.791302   10124 out.go:352] Setting JSON to false
	I1008 11:08:05.809091   10124 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5855,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:08:05.809162   10124 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:08:05.815435   10124 out.go:177] * [kubenet-446000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:08:05.824388   10124 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:08:05.824429   10124 notify.go:220] Checking for updates...
	I1008 11:08:05.831292   10124 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:08:05.834330   10124 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:08:05.837357   10124 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:08:05.840304   10124 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:08:05.843361   10124 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:08:05.854429   10124 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:05.854513   10124 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:05.854569   10124 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:08:05.859249   10124 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:08:05.866384   10124 start.go:297] selected driver: qemu2
	I1008 11:08:05.866393   10124 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:08:05.866400   10124 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:08:05.868985   10124 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:08:05.872292   10124 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:08:05.875496   10124 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:08:05.875528   10124 cni.go:80] network plugin configured as "kubenet", returning disabled
	I1008 11:08:05.875575   10124 start.go:340] cluster config:
	{Name:kubenet-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:05.880712   10124 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:05.889326   10124 out.go:177] * Starting "kubenet-446000" primary control-plane node in "kubenet-446000" cluster
	I1008 11:08:05.893343   10124 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:08:05.893364   10124 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:08:05.893376   10124 cache.go:56] Caching tarball of preloaded images
	I1008 11:08:05.893472   10124 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:08:05.893478   10124 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:08:05.893544   10124 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kubenet-446000/config.json ...
	I1008 11:08:05.893556   10124 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/kubenet-446000/config.json: {Name:mkcd6ebf6fef07408ea4a60001ed90611b7fad43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:08:05.893925   10124 start.go:360] acquireMachinesLock for kubenet-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:05.893977   10124 start.go:364] duration metric: took 46µs to acquireMachinesLock for "kubenet-446000"
	I1008 11:08:05.893987   10124 start.go:93] Provisioning new machine with config: &{Name:kubenet-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:05.894038   10124 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:05.898385   10124 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:08:05.916305   10124 start.go:159] libmachine.API.Create for "kubenet-446000" (driver="qemu2")
	I1008 11:08:05.916329   10124 client.go:168] LocalClient.Create starting
	I1008 11:08:05.916401   10124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:05.916442   10124 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:05.916454   10124 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:05.916503   10124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:05.916533   10124 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:05.916540   10124 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:05.916921   10124 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:06.060631   10124 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:06.205996   10124 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:06.206005   10124 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:06.206213   10124 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2
	I1008 11:08:06.216190   10124 main.go:141] libmachine: STDOUT: 
	I1008 11:08:06.216208   10124 main.go:141] libmachine: STDERR: 
	I1008 11:08:06.216271   10124 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2 +20000M
	I1008 11:08:06.224636   10124 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:06.224659   10124 main.go:141] libmachine: STDERR: 
	I1008 11:08:06.224679   10124 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2
	I1008 11:08:06.224685   10124 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:06.224695   10124 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:06.224726   10124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=b6:9e:3e:3a:d9:68 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2
	I1008 11:08:06.226568   10124 main.go:141] libmachine: STDOUT: 
	I1008 11:08:06.226582   10124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:06.226600   10124 client.go:171] duration metric: took 310.271083ms to LocalClient.Create
	I1008 11:08:08.228811   10124 start.go:128] duration metric: took 2.334780208s to createHost
	I1008 11:08:08.228880   10124 start.go:83] releasing machines lock for "kubenet-446000", held for 2.334934125s
	W1008 11:08:08.228937   10124 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:08.242950   10124 out.go:177] * Deleting "kubenet-446000" in qemu2 ...
	W1008 11:08:08.266377   10124 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:08.266418   10124 start.go:729] Will try again in 5 seconds ...
	I1008 11:08:13.268587   10124 start.go:360] acquireMachinesLock for kubenet-446000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:13.269239   10124 start.go:364] duration metric: took 527.583µs to acquireMachinesLock for "kubenet-446000"
	I1008 11:08:13.269392   10124 start.go:93] Provisioning new machine with config: &{Name:kubenet-446000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:kubenet-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:13.269704   10124 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:13.283547   10124 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1008 11:08:13.333111   10124 start.go:159] libmachine.API.Create for "kubenet-446000" (driver="qemu2")
	I1008 11:08:13.333160   10124 client.go:168] LocalClient.Create starting
	I1008 11:08:13.333302   10124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:13.333381   10124 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:13.333399   10124 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:13.333459   10124 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:13.333531   10124 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:13.333545   10124 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:13.334138   10124 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:13.490139   10124 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:13.565760   10124 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:13.565767   10124 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:13.565963   10124 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2
	I1008 11:08:13.575688   10124 main.go:141] libmachine: STDOUT: 
	I1008 11:08:13.575706   10124 main.go:141] libmachine: STDERR: 
	I1008 11:08:13.575769   10124 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2 +20000M
	I1008 11:08:13.584096   10124 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:13.584114   10124 main.go:141] libmachine: STDERR: 
	I1008 11:08:13.584136   10124 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2
	I1008 11:08:13.584143   10124 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:13.584151   10124 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:13.584177   10124 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e2:32:3c:4e:24:0a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/kubenet-446000/disk.qcow2
	I1008 11:08:13.585928   10124 main.go:141] libmachine: STDOUT: 
	I1008 11:08:13.585944   10124 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:13.585955   10124 client.go:171] duration metric: took 252.793959ms to LocalClient.Create
	I1008 11:08:15.588093   10124 start.go:128] duration metric: took 2.318392458s to createHost
	I1008 11:08:15.588179   10124 start.go:83] releasing machines lock for "kubenet-446000", held for 2.318935083s
	W1008 11:08:15.589479   10124 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-446000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:15.602063   10124 out.go:201] 
	W1008 11:08:15.607128   10124 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:15.607173   10124 out.go:270] * 
	W1008 11:08:15.609790   10124 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:08:15.620073   10124 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (9.88s)
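Each affected Start subtest spends roughly ten seconds in the driver's create/delete/retry loop before failing on the same dead socket. A pre-flight guard in the harness could surface the broken socket immediately; the helper below is hypothetical (it is not part of net_test.go or the minikube test suite), sketched under the assumption that a fast skip is preferable to a timed-out create:

	package preflight

	import (
		"net"
		"testing"
		"time"
	)

	// requireSocketVMnet is a hypothetical guard: it skips the calling test
	// when the vmnet socket refuses connections, instead of letting the
	// qemu2 driver create a VM, retry after 5 seconds, and fail anyway.
	func requireSocketVMnet(t *testing.T) {
		t.Helper()
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			t.Skipf("socket_vmnet unavailable: %v", err)
		}
		conn.Close()
	}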

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.783885958s)

                                                
                                                
-- stdout --
	* [old-k8s-version-919000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-919000" primary control-plane node in "old-k8s-version-919000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-919000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:08:18.003948   10243 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:18.004103   10243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:18.004106   10243 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:18.004109   10243 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:18.004243   10243 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:18.005417   10243 out.go:352] Setting JSON to false
	I1008 11:08:18.023507   10243 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5868,"bootTime":1728405030,"procs":575,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:08:18.023575   10243 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:08:18.028103   10243 out.go:177] * [old-k8s-version-919000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:08:18.035121   10243 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:08:18.035180   10243 notify.go:220] Checking for updates...
	I1008 11:08:18.042062   10243 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:08:18.045070   10243 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:08:18.047958   10243 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:08:18.051003   10243 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:08:18.054090   10243 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:08:18.055745   10243 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:18.055821   10243 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:18.055889   10243 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:08:18.060042   10243 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:08:18.066903   10243 start.go:297] selected driver: qemu2
	I1008 11:08:18.066909   10243 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:08:18.066914   10243 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:08:18.069332   10243 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:08:18.072041   10243 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:08:18.075187   10243 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:08:18.075211   10243 cni.go:84] Creating CNI manager for ""
	I1008 11:08:18.075234   10243 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1008 11:08:18.075269   10243 start.go:340] cluster config:
	{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:18.079871   10243 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:18.087989   10243 out.go:177] * Starting "old-k8s-version-919000" primary control-plane node in "old-k8s-version-919000" cluster
	I1008 11:08:18.092040   10243 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 11:08:18.092057   10243 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 11:08:18.092067   10243 cache.go:56] Caching tarball of preloaded images
	I1008 11:08:18.092137   10243 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:08:18.092143   10243 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1008 11:08:18.092213   10243 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/old-k8s-version-919000/config.json ...
	I1008 11:08:18.092224   10243 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/old-k8s-version-919000/config.json: {Name:mk9235a44dcf6f433392c81ac99cdb7b2c2e66d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:08:18.092482   10243 start.go:360] acquireMachinesLock for old-k8s-version-919000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:18.092535   10243 start.go:364] duration metric: took 44.708µs to acquireMachinesLock for "old-k8s-version-919000"
	I1008 11:08:18.092546   10243 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:18.092577   10243 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:18.096090   10243 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:08:18.113514   10243 start.go:159] libmachine.API.Create for "old-k8s-version-919000" (driver="qemu2")
	I1008 11:08:18.113538   10243 client.go:168] LocalClient.Create starting
	I1008 11:08:18.113614   10243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:18.113655   10243 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:18.113667   10243 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:18.113707   10243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:18.113743   10243 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:18.113750   10243 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:18.114131   10243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:18.258692   10243 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:18.307889   10243 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:18.307894   10243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:18.308076   10243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:18.317941   10243 main.go:141] libmachine: STDOUT: 
	I1008 11:08:18.317962   10243 main.go:141] libmachine: STDERR: 
	I1008 11:08:18.318024   10243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2 +20000M
	I1008 11:08:18.326815   10243 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:18.326833   10243 main.go:141] libmachine: STDERR: 
	I1008 11:08:18.326849   10243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:18.326855   10243 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:18.326867   10243 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:18.326894   10243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:fb:c0:ac:88:f1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:18.328851   10243 main.go:141] libmachine: STDOUT: 
	I1008 11:08:18.328866   10243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:18.328883   10243 client.go:171] duration metric: took 215.339542ms to LocalClient.Create
	I1008 11:08:20.330201   10243 start.go:128] duration metric: took 2.237642125s to createHost
	I1008 11:08:20.330336   10243 start.go:83] releasing machines lock for "old-k8s-version-919000", held for 2.237829041s
	W1008 11:08:20.330395   10243 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:20.343375   10243 out.go:177] * Deleting "old-k8s-version-919000" in qemu2 ...
	W1008 11:08:20.366765   10243 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:20.366791   10243 start.go:729] Will try again in 5 seconds ...
	I1008 11:08:25.368940   10243 start.go:360] acquireMachinesLock for old-k8s-version-919000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:25.369498   10243 start.go:364] duration metric: took 448.541µs to acquireMachinesLock for "old-k8s-version-919000"
	I1008 11:08:25.369606   10243 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:25.369906   10243 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:25.383668   10243 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:08:25.432856   10243 start.go:159] libmachine.API.Create for "old-k8s-version-919000" (driver="qemu2")
	I1008 11:08:25.432916   10243 client.go:168] LocalClient.Create starting
	I1008 11:08:25.433047   10243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:25.433122   10243 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:25.433136   10243 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:25.433195   10243 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:25.433249   10243 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:25.433261   10243 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:25.433898   10243 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:25.590065   10243 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:25.692502   10243 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:25.692508   10243 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:25.692680   10243 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:25.702696   10243 main.go:141] libmachine: STDOUT: 
	I1008 11:08:25.702725   10243 main.go:141] libmachine: STDERR: 
	I1008 11:08:25.702779   10243 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2 +20000M
	I1008 11:08:25.711240   10243 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:25.711255   10243 main.go:141] libmachine: STDERR: 
	I1008 11:08:25.711267   10243 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:25.711282   10243 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:25.711290   10243 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:25.711321   10243 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:2b:5b:c7:82:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:25.713068   10243 main.go:141] libmachine: STDOUT: 
	I1008 11:08:25.713085   10243 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:25.713099   10243 client.go:171] duration metric: took 280.181417ms to LocalClient.Create
	I1008 11:08:27.715282   10243 start.go:128] duration metric: took 2.345387542s to createHost
	I1008 11:08:27.715339   10243 start.go:83] releasing machines lock for "old-k8s-version-919000", held for 2.34585875s
	W1008 11:08:27.715680   10243 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-919000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:27.727340   10243 out.go:201] 
	W1008 11:08:27.730427   10243 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:27.730470   10243 out.go:270] * 
	W1008 11:08:27.733317   10243 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:08:27.742292   10243 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (72.702458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-919000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-919000 create -f testdata/busybox.yaml: exit status 1 (28.737833ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-919000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-919000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (34.517208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (33.892917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.10s)
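The DeployApp and EnableAddonWhileActive failures are secondary: FirstStart never created the VM, so no kubeconfig context named old-k8s-version-919000 exists and every kubectl --context call fails the same way. One way to confirm the cascade is to list the context names (kubectl config get-contexts -o name prints one name per line); the program below is only an illustration of that check:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List kubeconfig context names; the profile from the failed FirstStart
		// should be absent, which explains the kubectl errors above.
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}
		for _, name := range strings.Fields(string(out)) {
			if name == "old-k8s-version-919000" {
				fmt.Println("context exists; the failures would need another explanation")
				return
			}
		}
		fmt.Println(`no context "old-k8s-version-919000" - the DeployApp failure is a cascade`)
	}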

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-919000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-919000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-919000 describe deploy/metrics-server -n kube-system: exit status 1 (27.250875ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-919000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-919000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (34.748333ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.19868975s)

-- stdout --
	* [old-k8s-version-919000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-919000" primary control-plane node in "old-k8s-version-919000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-919000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-919000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:08:31.777366   10291 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:31.777509   10291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:31.777512   10291 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:31.777515   10291 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:31.777629   10291 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:31.778741   10291 out.go:352] Setting JSON to false
	I1008 11:08:31.796641   10291 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5881,"bootTime":1728405030,"procs":574,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:08:31.796715   10291 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:08:31.801454   10291 out.go:177] * [old-k8s-version-919000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:08:31.808342   10291 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:08:31.808399   10291 notify.go:220] Checking for updates...
	I1008 11:08:31.816306   10291 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:08:31.819362   10291 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:08:31.822393   10291 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:08:31.825356   10291 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:08:31.828357   10291 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:08:31.831704   10291 config.go:182] Loaded profile config "old-k8s-version-919000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1008 11:08:31.835307   10291 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I1008 11:08:31.838361   10291 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:08:31.842414   10291 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 11:08:31.849336   10291 start.go:297] selected driver: qemu2
	I1008 11:08:31.849345   10291 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:31.849402   10291 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:08:31.851918   10291 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:08:31.851947   10291 cni.go:84] Creating CNI manager for ""
	I1008 11:08:31.851977   10291 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1008 11:08:31.852003   10291 start.go:340] cluster config:
	{Name:old-k8s-version-919000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-919000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:31.856887   10291 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:31.865345   10291 out.go:177] * Starting "old-k8s-version-919000" primary control-plane node in "old-k8s-version-919000" cluster
	I1008 11:08:31.868409   10291 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 11:08:31.868425   10291 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 11:08:31.868437   10291 cache.go:56] Caching tarball of preloaded images
	I1008 11:08:31.868514   10291 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:08:31.868519   10291 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1008 11:08:31.868588   10291 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/old-k8s-version-919000/config.json ...
	I1008 11:08:31.869053   10291 start.go:360] acquireMachinesLock for old-k8s-version-919000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:31.869084   10291 start.go:364] duration metric: took 24.541µs to acquireMachinesLock for "old-k8s-version-919000"
	I1008 11:08:31.869092   10291 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:08:31.869096   10291 fix.go:54] fixHost starting: 
	I1008 11:08:31.869211   10291 fix.go:112] recreateIfNeeded on old-k8s-version-919000: state=Stopped err=<nil>
	W1008 11:08:31.869221   10291 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:08:31.872373   10291 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-919000" ...
	I1008 11:08:31.879358   10291 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:31.879402   10291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:2b:5b:c7:82:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:31.881576   10291 main.go:141] libmachine: STDOUT: 
	I1008 11:08:31.881596   10291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:31.881633   10291 fix.go:56] duration metric: took 12.528666ms for fixHost
	I1008 11:08:31.881637   10291 start.go:83] releasing machines lock for "old-k8s-version-919000", held for 12.549209ms
	W1008 11:08:31.881645   10291 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:31.881692   10291 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:31.881697   10291 start.go:729] Will try again in 5 seconds ...
	I1008 11:08:36.883814   10291 start.go:360] acquireMachinesLock for old-k8s-version-919000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:36.884168   10291 start.go:364] duration metric: took 271.917µs to acquireMachinesLock for "old-k8s-version-919000"
	I1008 11:08:36.884276   10291 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:08:36.884300   10291 fix.go:54] fixHost starting: 
	I1008 11:08:36.884964   10291 fix.go:112] recreateIfNeeded on old-k8s-version-919000: state=Stopped err=<nil>
	W1008 11:08:36.884996   10291 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:08:36.894502   10291 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-919000" ...
	I1008 11:08:36.898366   10291 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:36.898584   10291 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:2b:5b:c7:82:20 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/old-k8s-version-919000/disk.qcow2
	I1008 11:08:36.908469   10291 main.go:141] libmachine: STDOUT: 
	I1008 11:08:36.908524   10291 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:36.908580   10291 fix.go:56] duration metric: took 24.286291ms for fixHost
	I1008 11:08:36.908598   10291 start.go:83] releasing machines lock for "old-k8s-version-919000", held for 24.404958ms
	W1008 11:08:36.908748   10291 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-919000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-919000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:36.917430   10291 out.go:201] 
	W1008 11:08:36.921554   10291 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:36.921578   10291 out.go:270] * 
	* 
	W1008 11:08:36.923950   10291 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:08:36.930385   10291 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-919000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (74.827291ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.28s)
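Both restart attempts in the SecondStart log fail at the same call: the qemu2 driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and that client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet ("Connection refused"). A quick way to confirm the daemon is the culprit, sketched in Go (a hypothetical standalone probe, not part of the suite):

    package main

    import (
        "fmt"
        "net"
        "os"
        "time"
    )

    func main() {
        // Socket path taken from the failing command line in the log.
        const sock = "/var/run/socket_vmnet"
        conn, err := net.DialTimeout("unix", sock, 2*time.Second)
        if err != nil {
            // "connection refused" here means the socket file exists (or
            // the path is wrong) but no daemon is accepting on it.
            fmt.Fprintf(os.Stderr, "cannot connect to %s: %v\n", sock, err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

If the probe fails, restarting the daemon on the host (for Homebrew installs, typically "sudo brew services restart socket_vmnet") is the usual remedy before re-running the suite.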

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-919000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (35.654708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.04s)
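The repeated `client config: context "old-k8s-version-919000" does not exist` failures occur before any cluster API call: the harness loads the kubeconfig and asks for a context that was never written, because the first start never completed. The error string matches what client-go's kubeconfig loader produces; a minimal sketch under that assumption (requires k8s.io/client-go):

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        overrides := &clientcmd.ConfigOverrides{
            CurrentContext: "old-k8s-version-919000",
        }
        _, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
            rules, overrides).ClientConfig()
        if err != nil {
            // With no such context in the kubeconfig this prints:
            //   context "old-k8s-version-919000" does not exist
            fmt.Println(err)
        }
    }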

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-919000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-919000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-919000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.200875ms)

** stderr ** 
	error: context "old-k8s-version-919000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-919000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (34.35375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-919000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (33.930541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.08s)
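The image diff in the VerifyKubernetesImages block above is in go-cmp's "(-want +got)" notation: with the VM stopped, "image list --format=json" returns nothing, so every expected v1.20.0 image lands on the "-" (want-only) side. A sketch of how such a diff is rendered, assuming github.com/google/go-cmp (illustrative, not the test's own code):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "k8s.gcr.io/pause:3.2",
            "k8s.gcr.io/kube-apiserver:v1.20.0",
        }
        got := []string{} // image list from a stopped VM comes back empty
        // Entries only in want print with "-", entries only in got with "+",
        // which is exactly the layout of the block above.
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.20.0 images missing (-want +got):\n%s", diff)
        }
    }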

TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-919000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-919000 --alsologtostderr -v=1: exit status 83 (46.586916ms)

-- stdout --
	* The control-plane node old-k8s-version-919000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-919000"

-- /stdout --
** stderr ** 
	I1008 11:08:37.229340   10310 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:37.229775   10310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:37.229779   10310 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:37.229781   10310 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:37.229937   10310 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:37.230169   10310 out.go:352] Setting JSON to false
	I1008 11:08:37.230176   10310 mustload.go:65] Loading cluster: old-k8s-version-919000
	I1008 11:08:37.230383   10310 config.go:182] Loaded profile config "old-k8s-version-919000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I1008 11:08:37.234649   10310 out.go:177] * The control-plane node old-k8s-version-919000 host is not running: state=Stopped
	I1008 11:08:37.238435   10310 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-919000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-919000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (34.28025ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (34.282542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-919000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.12s)

TestStartStop/group/no-preload/serial/FirstStart (10.04s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-528000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-528000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.96760225s)

-- stdout --
	* [no-preload-528000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-528000" primary control-plane node in "no-preload-528000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-528000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:08:37.572996   10327 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:37.573161   10327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:37.573164   10327 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:37.573166   10327 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:37.573304   10327 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:37.574454   10327 out.go:352] Setting JSON to false
	I1008 11:08:37.592529   10327 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5887,"bootTime":1728405030,"procs":574,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:08:37.592602   10327 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:08:37.597593   10327 out.go:177] * [no-preload-528000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:08:37.609612   10327 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:08:37.609657   10327 notify.go:220] Checking for updates...
	I1008 11:08:37.616514   10327 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:08:37.619590   10327 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:08:37.622562   10327 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:08:37.625583   10327 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:08:37.628572   10327 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:08:37.631829   10327 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:37.631892   10327 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:37.631947   10327 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:08:37.636536   10327 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:08:37.642512   10327 start.go:297] selected driver: qemu2
	I1008 11:08:37.642521   10327 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:08:37.642528   10327 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:08:37.645005   10327 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:08:37.648575   10327 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:08:37.651660   10327 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:08:37.651695   10327 cni.go:84] Creating CNI manager for ""
	I1008 11:08:37.651719   10327 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:08:37.651731   10327 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:08:37.651766   10327 start.go:340] cluster config:
	{Name:no-preload-528000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:37.656670   10327 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.664555   10327 out.go:177] * Starting "no-preload-528000" primary control-plane node in "no-preload-528000" cluster
	I1008 11:08:37.668519   10327 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:08:37.668625   10327 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/no-preload-528000/config.json ...
	I1008 11:08:37.668650   10327 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/no-preload-528000/config.json: {Name:mk024bbc9ce1fefe04929e03c7217d026e0a7767 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:08:37.668657   10327 cache.go:107] acquiring lock: {Name:mk5604f791a1ef2f4d9ad107fc168a2b664c55e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668673   10327 cache.go:107] acquiring lock: {Name:mk169ecda7c5762f2a6b09160237e58058dfebe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668682   10327 cache.go:107] acquiring lock: {Name:mk61708dc1953192bfba7f02a71d61889b97d937 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668779   10327 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 11:08:37.668787   10327 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 137.042µs
	I1008 11:08:37.668790   10327 cache.go:107] acquiring lock: {Name:mk603d4059dbcab3b157b51b107fd27ae95068d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668806   10327 cache.go:107] acquiring lock: {Name:mkb2058363884a7b469f1a96782a048c745ef061 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668857   10327 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 11:08:37.668866   10327 cache.go:107] acquiring lock: {Name:mk88f9f64be3c6bde5d64477e3d400cafb434059 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668894   10327 cache.go:107] acquiring lock: {Name:mk021b5bc5f0a0a0b8d69465a949c2245379398e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.668905   10327 cache.go:107] acquiring lock: {Name:mk4db5c21e695dd297bb579c67143a884341926b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:37.669022   10327 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 11:08:37.669029   10327 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 11:08:37.669041   10327 start.go:360] acquireMachinesLock for no-preload-528000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:37.669102   10327 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1008 11:08:37.669218   10327 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 11:08:37.669246   10327 start.go:364] duration metric: took 141.917µs to acquireMachinesLock for "no-preload-528000"
	I1008 11:08:37.669261   10327 start.go:93] Provisioning new machine with config: &{Name:no-preload-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:37.669313   10327 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:37.669426   10327 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 11:08:37.669431   10327 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1008 11:08:37.669480   10327 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 11:08:37.673519   10327 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:08:37.680286   10327 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.1
	I1008 11:08:37.680888   10327 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1008 11:08:37.680991   10327 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.1
	I1008 11:08:37.681028   10327 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.1
	I1008 11:08:37.682780   10327 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1008 11:08:37.682904   10327 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.1
	I1008 11:08:37.682994   10327 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1008 11:08:37.691322   10327 start.go:159] libmachine.API.Create for "no-preload-528000" (driver="qemu2")
	I1008 11:08:37.691349   10327 client.go:168] LocalClient.Create starting
	I1008 11:08:37.691428   10327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:37.691464   10327 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:37.691475   10327 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:37.691514   10327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:37.691543   10327 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:37.691551   10327 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:37.691910   10327 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:37.842900   10327 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:38.050242   10327 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:38.050258   10327 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:38.050483   10327 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:38.061586   10327 main.go:141] libmachine: STDOUT: 
	I1008 11:08:38.061608   10327 main.go:141] libmachine: STDERR: 
	I1008 11:08:38.061668   10327 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2 +20000M
	I1008 11:08:38.071085   10327 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:38.071105   10327 main.go:141] libmachine: STDERR: 
	I1008 11:08:38.071121   10327 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:38.071126   10327 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:38.071139   10327 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:38.071168   10327 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:e5:80:4d:c8:6f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:38.073211   10327 main.go:141] libmachine: STDOUT: 
	I1008 11:08:38.073228   10327 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:38.073244   10327 client.go:171] duration metric: took 381.898875ms to LocalClient.Create
	I1008 11:08:38.118493   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1
	I1008 11:08:38.147893   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10
	I1008 11:08:38.192435   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1
	I1008 11:08:38.274187   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1
	I1008 11:08:38.322359   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1008 11:08:38.322372   10327 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 653.633834ms
	I1008 11:08:38.322381   10327 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1008 11:08:38.345280   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3
	I1008 11:08:38.369260   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1
	I1008 11:08:38.430245   10327 cache.go:162] opening:  /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0
	I1008 11:08:40.073494   10327 start.go:128] duration metric: took 2.404185792s to createHost
	I1008 11:08:40.073551   10327 start.go:83] releasing machines lock for "no-preload-528000", held for 2.404332834s
	W1008 11:08:40.073603   10327 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:40.083866   10327 out.go:177] * Deleting "no-preload-528000" in qemu2 ...
	W1008 11:08:40.110132   10327 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:40.110165   10327 start.go:729] Will try again in 5 seconds ...
	I1008 11:08:40.856875   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1008 11:08:40.856930   10327 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 3.188079167s
	I1008 11:08:40.856960   10327 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1008 11:08:41.032839   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1008 11:08:41.032894   10327 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 3.364282959s
	I1008 11:08:41.032933   10327 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1008 11:08:41.511043   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1008 11:08:41.511103   10327 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 3.842333542s
	I1008 11:08:41.511134   10327 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1008 11:08:41.919949   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1008 11:08:41.920003   10327 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 4.251421292s
	I1008 11:08:41.920038   10327 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1008 11:08:42.128261   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1008 11:08:42.128307   10327 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 4.459616s
	I1008 11:08:42.128331   10327 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1008 11:08:45.110658   10327 start.go:360] acquireMachinesLock for no-preload-528000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:45.111190   10327 start.go:364] duration metric: took 453.834µs to acquireMachinesLock for "no-preload-528000"
	I1008 11:08:45.111327   10327 start.go:93] Provisioning new machine with config: &{Name:no-preload-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:45.111577   10327 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:45.117174   10327 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:08:45.167533   10327 start.go:159] libmachine.API.Create for "no-preload-528000" (driver="qemu2")
	I1008 11:08:45.167599   10327 client.go:168] LocalClient.Create starting
	I1008 11:08:45.167796   10327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:45.167889   10327 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:45.167908   10327 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:45.168000   10327 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:45.168057   10327 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:45.168087   10327 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:45.168707   10327 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:45.324878   10327 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:45.440033   10327 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:45.440040   10327 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:45.440233   10327 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:45.450329   10327 main.go:141] libmachine: STDOUT: 
	I1008 11:08:45.450344   10327 main.go:141] libmachine: STDERR: 
	I1008 11:08:45.450416   10327 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2 +20000M
	I1008 11:08:45.459094   10327 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:45.459110   10327 main.go:141] libmachine: STDERR: 
	I1008 11:08:45.459125   10327 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:45.459130   10327 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:45.459140   10327 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:45.459179   10327 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:03:d6:b0:b2:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:45.461065   10327 main.go:141] libmachine: STDOUT: 
	I1008 11:08:45.461090   10327 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:45.461103   10327 client.go:171] duration metric: took 293.503667ms to LocalClient.Create
	I1008 11:08:45.609574   10327 cache.go:157] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1008 11:08:45.609596   10327 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 7.94089825s
	I1008 11:08:45.609609   10327 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1008 11:08:45.609632   10327 cache.go:87] Successfully saved all images to host disk.
	I1008 11:08:47.461996   10327 start.go:128] duration metric: took 2.350432291s to createHost
	I1008 11:08:47.462038   10327 start.go:83] releasing machines lock for "no-preload-528000", held for 2.350864083s
	W1008 11:08:47.462418   10327 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-528000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:47.474950   10327 out.go:201] 
	W1008 11:08:47.479167   10327 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:47.479246   10327 out.go:270] * 
	* 
	W1008 11:08:47.481922   10327 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:08:47.493004   10327 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-528000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (72.566084ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (10.04s)
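Every failure in this no-preload group traces back to the same driver error above: socket_vmnet_client cannot reach the socket_vmnet daemon, so qemu-system-aarch64 is never launched. A minimal standalone sketch of the same connectivity probe, assuming the default SocketVMnetPath shown in the cluster config (illustrative only, not part of the test suite):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the unix socket the qemu2 driver hands to socket_vmnet_client.
	// With no socket_vmnet daemon listening, this fails with the same
	// "connection refused" seen in the log above.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}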

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-528000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-528000 create -f testdata/busybox.yaml: exit status 1 (28.835875ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-528000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-528000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (34.614666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (34.885583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.10s)
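The DeployApp failure is a knock-on effect of FirstStart: because the VM was never created, no kubeconfig context was written for the profile, so every "kubectl --context no-preload-528000" invocation exits 1. A short sketch of the same context check, assuming k8s.io/client-go is available (illustrative only; the test itself shells out to kubectl):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig named by KUBECONFIG and look up the profile's
	// context, mirroring kubectl's `context "..." does not exist` error.
	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
	if err != nil {
		fmt.Println("cannot load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["no-preload-528000"]; !ok {
		fmt.Println(`context "no-preload-528000" does not exist`)
	}
}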

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-528000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-528000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-528000 describe deploy/metrics-server -n kube-system: exit status 1 (27.035542ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-528000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-528000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (34.704291ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-528000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-528000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.1893785s)

                                                
                                                
-- stdout --
	* [no-preload-528000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-528000" primary control-plane node in "no-preload-528000" cluster
	* Restarting existing qemu2 VM for "no-preload-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-528000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:08:51.078916   10403 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:51.079058   10403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:51.079062   10403 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:51.079064   10403 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:51.079179   10403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:51.080214   10403 out.go:352] Setting JSON to false
	I1008 11:08:51.097896   10403 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5901,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:08:51.097964   10403 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:08:51.102535   10403 out.go:177] * [no-preload-528000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:08:51.109392   10403 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:08:51.109447   10403 notify.go:220] Checking for updates...
	I1008 11:08:51.117551   10403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:08:51.120562   10403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:08:51.123496   10403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:08:51.126580   10403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:08:51.129508   10403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:08:51.132792   10403 config.go:182] Loaded profile config "no-preload-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:51.133075   10403 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:08:51.137531   10403 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 11:08:51.144519   10403 start.go:297] selected driver: qemu2
	I1008 11:08:51.144527   10403 start.go:901] validating driver "qemu2" against &{Name:no-preload-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:51.144590   10403 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:08:51.147096   10403 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:08:51.147121   10403 cni.go:84] Creating CNI manager for ""
	I1008 11:08:51.147147   10403 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:08:51.147173   10403 start.go:340] cluster config:
	{Name:no-preload-528000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:no-preload-528000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:51.151699   10403 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.159530   10403 out.go:177] * Starting "no-preload-528000" primary control-plane node in "no-preload-528000" cluster
	I1008 11:08:51.163587   10403 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:08:51.163674   10403 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/no-preload-528000/config.json ...
	I1008 11:08:51.163751   10403 cache.go:107] acquiring lock: {Name:mk5604f791a1ef2f4d9ad107fc168a2b664c55e6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163754   10403 cache.go:107] acquiring lock: {Name:mk169ecda7c5762f2a6b09160237e58058dfebe4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163787   10403 cache.go:107] acquiring lock: {Name:mkb2058363884a7b469f1a96782a048c745ef061 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163847   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1008 11:08:51.163854   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 exists
	I1008 11:08:51.163859   10403 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 115.958µs
	I1008 11:08:51.163863   10403 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1" took 131.667µs
	I1008 11:08:51.163868   10403 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.31.1 succeeded
	I1008 11:08:51.163876   10403 cache.go:107] acquiring lock: {Name:mk4db5c21e695dd297bb579c67143a884341926b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163892   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 exists
	I1008 11:08:51.163887   10403 cache.go:107] acquiring lock: {Name:mk88f9f64be3c6bde5d64477e3d400cafb434059 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163899   10403 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1" took 148.291µs
	I1008 11:08:51.163910   10403 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.31.1 succeeded
	I1008 11:08:51.163868   10403 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1008 11:08:51.163937   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 exists
	I1008 11:08:51.163943   10403 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1" took 67.958µs
	I1008 11:08:51.163956   10403 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.31.1 succeeded
	I1008 11:08:51.163909   10403 cache.go:107] acquiring lock: {Name:mk603d4059dbcab3b157b51b107fd27ae95068d4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163930   10403 cache.go:107] acquiring lock: {Name:mk021b5bc5f0a0a0b8d69465a949c2245379398e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.163964   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 exists
	I1008 11:08:51.164021   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 exists
	I1008 11:08:51.164027   10403 cache.go:96] cache image "registry.k8s.io/pause:3.10" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10" took 172.208µs
	I1008 11:08:51.164036   10403 cache.go:80] save to tar file registry.k8s.io/pause:3.10 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10 succeeded
	I1008 11:08:51.163956   10403 cache.go:107] acquiring lock: {Name:mk61708dc1953192bfba7f02a71d61889b97d937 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:51.164045   10403 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.3" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3" took 158.708µs
	I1008 11:08:51.164050   10403 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.3 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.3 succeeded
	I1008 11:08:51.164042   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 exists
	I1008 11:08:51.164067   10403 cache.go:96] cache image "registry.k8s.io/etcd:3.5.15-0" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0" took 177.875µs
	I1008 11:08:51.164074   10403 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.15-0 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.15-0 succeeded
	I1008 11:08:51.164103   10403 cache.go:115] /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 exists
	I1008 11:08:51.164108   10403 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.31.1" -> "/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1" took 208.541µs
	I1008 11:08:51.164112   10403 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.31.1 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.31.1 succeeded
	I1008 11:08:51.164117   10403 cache.go:87] Successfully saved all images to host disk.
	I1008 11:08:51.164123   10403 start.go:360] acquireMachinesLock for no-preload-528000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:51.164159   10403 start.go:364] duration metric: took 29.625µs to acquireMachinesLock for "no-preload-528000"
	I1008 11:08:51.164167   10403 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:08:51.164171   10403 fix.go:54] fixHost starting: 
	I1008 11:08:51.164293   10403 fix.go:112] recreateIfNeeded on no-preload-528000: state=Stopped err=<nil>
	W1008 11:08:51.164303   10403 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:08:51.172502   10403 out.go:177] * Restarting existing qemu2 VM for "no-preload-528000" ...
	I1008 11:08:51.176357   10403 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:51.176404   10403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:03:d6:b0:b2:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:51.178640   10403 main.go:141] libmachine: STDOUT: 
	I1008 11:08:51.178661   10403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:51.178688   10403 fix.go:56] duration metric: took 14.517083ms for fixHost
	I1008 11:08:51.178693   10403 start.go:83] releasing machines lock for "no-preload-528000", held for 14.530209ms
	W1008 11:08:51.178700   10403 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:51.178731   10403 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:51.178739   10403 start.go:729] Will try again in 5 seconds ...
	I1008 11:08:56.180776   10403 start.go:360] acquireMachinesLock for no-preload-528000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:56.181197   10403 start.go:364] duration metric: took 319µs to acquireMachinesLock for "no-preload-528000"
	I1008 11:08:56.181309   10403 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:08:56.181329   10403 fix.go:54] fixHost starting: 
	I1008 11:08:56.181974   10403 fix.go:112] recreateIfNeeded on no-preload-528000: state=Stopped err=<nil>
	W1008 11:08:56.182005   10403 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:08:56.185381   10403 out.go:177] * Restarting existing qemu2 VM for "no-preload-528000" ...
	I1008 11:08:56.188370   10403 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:56.188535   10403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/qemu.pid -device virtio-net-pci,netdev=net0,mac=52:03:d6:b0:b2:02 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/no-preload-528000/disk.qcow2
	I1008 11:08:56.198502   10403 main.go:141] libmachine: STDOUT: 
	I1008 11:08:56.198597   10403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:56.198668   10403 fix.go:56] duration metric: took 17.340459ms for fixHost
	I1008 11:08:56.198687   10403 start.go:83] releasing machines lock for "no-preload-528000", held for 17.467875ms
	W1008 11:08:56.198851   10403 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-528000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-528000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:56.207334   10403 out.go:201] 
	W1008 11:08:56.210457   10403 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:08:56.210502   10403 out.go:270] * 
	* 
	W1008 11:08:56.212994   10403 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:08:56.226300   10403 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-528000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (73.320542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
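The SecondStart log shows the recovery shape on a restart: fixHost fails, a "will try again" warning is printed, minikube waits five seconds, retries once, and then exits 80 with GUEST_PROVISION. A sketch of that fixed-delay, single-retry flow (names and structure are illustrative, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the qemu2 driver start that fails above.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	if err := startHost(); err == nil {
		return
	}
	// First attempt failed; mirror the log's fixed 5-second backoff.
	fmt.Println("! StartHost failed, but will try again in 5 seconds")
	time.Sleep(5 * time.Second)
	if err := startHost(); err != nil {
		fmt.Println("X Exiting due to GUEST_PROVISION:", err)
	}
}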

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-528000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (35.963709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-528000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-528000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-528000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.380375ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-528000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-528000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (34.679208ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-528000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (33.76075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.08s)
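The "(-want +got)" output above is a go-cmp style diff of the expected v1.31.1 image list against an empty result: with the host stopped, "image list --format=json" returns nothing, so every expected image lands on the -want side. A sketch of that comparison, assuming github.com/google/go-cmp (the want list is abbreviated here):

package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

func main() {
	want := []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/kube-apiserver:v1.31.1",
		"registry.k8s.io/pause:3.10",
		// remaining expected images omitted for brevity
	}
	var got []string // empty: the stopped host reports no images
	if diff := cmp.Diff(want, got); diff != "" {
		fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
	}
}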

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-528000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-528000 --alsologtostderr -v=1: exit status 83 (44.827708ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-528000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-528000"

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:08:56.519159   10422 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:56.519350   10422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:56.519354   10422 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:56.519356   10422 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:56.519488   10422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:56.519760   10422 out.go:352] Setting JSON to false
	I1008 11:08:56.519767   10422 mustload.go:65] Loading cluster: no-preload-528000
	I1008 11:08:56.519969   10422 config.go:182] Loaded profile config "no-preload-528000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:56.524275   10422 out.go:177] * The control-plane node no-preload-528000 host is not running: state=Stopped
	I1008 11:08:56.528329   10422 out.go:177]   To start a cluster, run: "minikube start -p no-preload-528000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-528000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (34.075417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (34.787458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-528000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-149000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-149000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.776515458s)

                                                
                                                
-- stdout --
	* [embed-certs-149000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-149000" primary control-plane node in "embed-certs-149000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-149000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1008 11:08:56.860551   10439 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:08:56.860697   10439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:56.860700   10439 out.go:358] Setting ErrFile to fd 2...
	I1008 11:08:56.860702   10439 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:08:56.860839   10439 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:08:56.861940   10439 out.go:352] Setting JSON to false
	I1008 11:08:56.879655   10439 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5906,"bootTime":1728405030,"procs":564,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:08:56.879716   10439 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:08:56.884209   10439 out.go:177] * [embed-certs-149000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:08:56.891275   10439 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:08:56.891330   10439 notify.go:220] Checking for updates...
	I1008 11:08:56.898245   10439 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:08:56.899604   10439 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:08:56.902245   10439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:08:56.905208   10439 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:08:56.908264   10439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:08:56.911600   10439 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:56.911671   10439 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:08:56.911726   10439 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:08:56.916265   10439 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:08:56.923163   10439 start.go:297] selected driver: qemu2
	I1008 11:08:56.923172   10439 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:08:56.923178   10439 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:08:56.925609   10439 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:08:56.929301   10439 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:08:56.932324   10439 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:08:56.932340   10439 cni.go:84] Creating CNI manager for ""
	I1008 11:08:56.932360   10439 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:08:56.932368   10439 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:08:56.932404   10439 start.go:340] cluster config:
	{Name:embed-certs-149000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:08:56.936950   10439 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:08:56.945267   10439 out.go:177] * Starting "embed-certs-149000" primary control-plane node in "embed-certs-149000" cluster
	I1008 11:08:56.949238   10439 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:08:56.949256   10439 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:08:56.949267   10439 cache.go:56] Caching tarball of preloaded images
	I1008 11:08:56.949350   10439 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:08:56.949356   10439 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:08:56.949431   10439 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/embed-certs-149000/config.json ...
	I1008 11:08:56.949442   10439 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/embed-certs-149000/config.json: {Name:mk1d4bc3be1b6a33f6e6c65549df4a19df96adcc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:08:56.949700   10439 start.go:360] acquireMachinesLock for embed-certs-149000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:08:56.949751   10439 start.go:364] duration metric: took 44.833µs to acquireMachinesLock for "embed-certs-149000"
	I1008 11:08:56.949763   10439 start.go:93] Provisioning new machine with config: &{Name:embed-certs-149000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:08:56.949794   10439 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:08:56.953326   10439 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:08:56.970807   10439 start.go:159] libmachine.API.Create for "embed-certs-149000" (driver="qemu2")
	I1008 11:08:56.970837   10439 client.go:168] LocalClient.Create starting
	I1008 11:08:56.970919   10439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:08:56.970958   10439 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:56.970967   10439 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:56.971016   10439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:08:56.971048   10439 main.go:141] libmachine: Decoding PEM data...
	I1008 11:08:56.971062   10439 main.go:141] libmachine: Parsing certificate...
	I1008 11:08:56.971506   10439 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:08:57.118979   10439 main.go:141] libmachine: Creating SSH key...
	I1008 11:08:57.175325   10439 main.go:141] libmachine: Creating Disk image...
	I1008 11:08:57.175331   10439 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:08:57.175534   10439 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:08:57.185310   10439 main.go:141] libmachine: STDOUT: 
	I1008 11:08:57.185324   10439 main.go:141] libmachine: STDERR: 
	I1008 11:08:57.185379   10439 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2 +20000M
	I1008 11:08:57.193955   10439 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:08:57.194023   10439 main.go:141] libmachine: STDERR: 
	I1008 11:08:57.194039   10439 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:08:57.194045   10439 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:08:57.194058   10439 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:08:57.194086   10439 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=12:cc:bc:42:9a:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:08:57.195883   10439 main.go:141] libmachine: STDOUT: 
	I1008 11:08:57.195954   10439 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:08:57.195984   10439 client.go:171] duration metric: took 225.144958ms to LocalClient.Create
	I1008 11:08:59.198221   10439 start.go:128] duration metric: took 2.248434916s to createHost
	I1008 11:08:59.198313   10439 start.go:83] releasing machines lock for "embed-certs-149000", held for 2.248587834s
	W1008 11:08:59.198405   10439 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:59.210397   10439 out.go:177] * Deleting "embed-certs-149000" in qemu2 ...
	W1008 11:08:59.237058   10439 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:08:59.237084   10439 start.go:729] Will try again in 5 seconds ...
	I1008 11:09:04.239203   10439 start.go:360] acquireMachinesLock for embed-certs-149000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:04.239886   10439 start.go:364] duration metric: took 576µs to acquireMachinesLock for "embed-certs-149000"
	I1008 11:09:04.240021   10439 start.go:93] Provisioning new machine with config: &{Name:embed-certs-149000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:09:04.240449   10439 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:09:04.252956   10439 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:09:04.304587   10439 start.go:159] libmachine.API.Create for "embed-certs-149000" (driver="qemu2")
	I1008 11:09:04.304644   10439 client.go:168] LocalClient.Create starting
	I1008 11:09:04.304783   10439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:09:04.304864   10439 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:04.304877   10439 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:04.304940   10439 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:09:04.304996   10439 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:04.305006   10439 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:04.305743   10439 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:09:04.463280   10439 main.go:141] libmachine: Creating SSH key...
	I1008 11:09:04.541477   10439 main.go:141] libmachine: Creating Disk image...
	I1008 11:09:04.541486   10439 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:09:04.541705   10439 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:09:04.551428   10439 main.go:141] libmachine: STDOUT: 
	I1008 11:09:04.551449   10439 main.go:141] libmachine: STDERR: 
	I1008 11:09:04.551511   10439 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2 +20000M
	I1008 11:09:04.559921   10439 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:09:04.559936   10439 main.go:141] libmachine: STDERR: 
	I1008 11:09:04.559946   10439 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:09:04.559964   10439 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:09:04.559974   10439 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:04.559999   10439 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f9:7d:3e:ca:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:09:04.561763   10439 main.go:141] libmachine: STDOUT: 
	I1008 11:09:04.561777   10439 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:04.561791   10439 client.go:171] duration metric: took 257.145958ms to LocalClient.Create
	I1008 11:09:06.563975   10439 start.go:128] duration metric: took 2.323522708s to createHost
	I1008 11:09:06.564073   10439 start.go:83] releasing machines lock for "embed-certs-149000", held for 2.324197125s
	W1008 11:09:06.564789   10439 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-149000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:06.577601   10439 out.go:201] 
	W1008 11:09:06.581612   10439 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:06.581692   10439 out.go:270] * 
	* 
	W1008 11:09:06.584532   10439 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:09:06.592556   10439 out.go:201] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-149000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
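
Every attempt in the log above dies at the same step: socket_vmnet_client cannot reach the unix socket at /var/run/socket_vmnet, so the qemu-system-aarch64 command is never actually launched. A minimal diagnostic sketch in Go (not part of minikube; the socket path is taken from the log) that performs the same connection the client needs:

    // probevmnet.go - dials the socket_vmnet unix socket; a failed dial
    // here corresponds to the repeated STDERR line
    //   Failed to connect to "/var/run/socket_vmnet": Connection refused
    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
            os.Exit(1)
        }
        conn.Close()
        fmt.Println("socket_vmnet is accepting connections")
    }

Until the socket_vmnet daemon is listening again, this dial fails the same way for every profile, which is consistent with the identical failures across the tests below.
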
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (71.487083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.85s)
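
Before the network step fails, the driver does finish its disk preparation: qemu-img convert turns the raw seed image into qcow2, then qemu-img resize grows it by 20000 MB, exactly as the two "executing:" lines in the log show. A sketch of those two steps via os/exec (file names shortened to stand-ins; assumes qemu-img is on PATH):

    // mkdisk.go - the two qemu-img steps from the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func run(name string, args ...string) {
        cmd := exec.Command(name, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }

    func main() {
        // convert the raw boot disk to qcow2, then grow it by 20000 MB
        run("qemu-img", "convert", "-f", "raw", "-O", "qcow2",
            "disk.qcow2.raw", "disk.qcow2")
        run("qemu-img", "resize", "disk.qcow2", "+20000M")
    }
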

TestStartStop/group/embed-certs/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-149000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-149000 create -f testdata/busybox.yaml: exit status 1 (28.821125ms)

** stderr ** 
	error: context "embed-certs-149000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-149000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (34.919709ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (34.079584ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.10s)
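
Every kubectl-based test in this group fails before doing any work, because the "embed-certs-149000" context was never written to the kubeconfig (the cluster never came up). A sketch of the same pre-check using client-go's clientcmd package (assumes k8s.io/client-go is available as a dependency):

    // ctxcheck.go - verifies a kubeconfig context exists before use,
    // mirroring the `error: context "embed-certs-149000" does not exist`
    // failures above.
    package main

    import (
        "fmt"
        "os"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Same resolution kubectl uses: $KUBECONFIG, then ~/.kube/config.
        rules := clientcmd.NewDefaultClientConfigLoadingRules()
        cfg, err := rules.Load()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if _, ok := cfg.Contexts["embed-certs-149000"]; !ok {
            fmt.Fprintln(os.Stderr, `context "embed-certs-149000" does not exist`)
            os.Exit(1)
        }
        fmt.Println("context present")
    }
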

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-149000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-149000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-149000 describe deploy/metrics-server -n kube-system: exit status 1 (27.258333ms)

** stderr ** 
	error: context "embed-certs-149000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-149000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (34.670417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/embed-certs/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-149000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-149000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.198501333s)

-- stdout --
	* [embed-certs-149000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-149000" primary control-plane node in "embed-certs-149000" cluster
	* Restarting existing qemu2 VM for "embed-certs-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-149000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:09:09.087936   10481 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:09.088112   10481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:09.088115   10481 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:09.088118   10481 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:09.088259   10481 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:09.089333   10481 out.go:352] Setting JSON to false
	I1008 11:09:09.106920   10481 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5919,"bootTime":1728405030,"procs":562,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:09:09.106989   10481 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:09:09.110534   10481 out.go:177] * [embed-certs-149000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:09:09.117501   10481 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:09:09.117549   10481 notify.go:220] Checking for updates...
	I1008 11:09:09.125431   10481 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:09:09.128399   10481 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:09:09.131452   10481 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:09:09.134438   10481 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:09:09.137386   10481 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:09:09.140751   10481 config.go:182] Loaded profile config "embed-certs-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:09.141043   10481 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:09:09.145456   10481 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 11:09:09.157428   10481 start.go:297] selected driver: qemu2
	I1008 11:09:09.157438   10481 start.go:901] validating driver "qemu2" against &{Name:embed-certs-149000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:09.157506   10481 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:09:09.160221   10481 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:09:09.160249   10481 cni.go:84] Creating CNI manager for ""
	I1008 11:09:09.160270   10481 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:09:09.160297   10481 start.go:340] cluster config:
	{Name:embed-certs-149000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-149000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:09.165198   10481 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:09:09.173384   10481 out.go:177] * Starting "embed-certs-149000" primary control-plane node in "embed-certs-149000" cluster
	I1008 11:09:09.177423   10481 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:09:09.177439   10481 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:09:09.177449   10481 cache.go:56] Caching tarball of preloaded images
	I1008 11:09:09.177541   10481 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:09:09.177547   10481 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:09:09.177609   10481 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/embed-certs-149000/config.json ...
	I1008 11:09:09.178073   10481 start.go:360] acquireMachinesLock for embed-certs-149000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:09.178106   10481 start.go:364] duration metric: took 27.584µs to acquireMachinesLock for "embed-certs-149000"
	I1008 11:09:09.178115   10481 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:09:09.178118   10481 fix.go:54] fixHost starting: 
	I1008 11:09:09.178247   10481 fix.go:112] recreateIfNeeded on embed-certs-149000: state=Stopped err=<nil>
	W1008 11:09:09.178256   10481 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:09:09.182421   10481 out.go:177] * Restarting existing qemu2 VM for "embed-certs-149000" ...
	I1008 11:09:09.190299   10481 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:09.190341   10481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f9:7d:3e:ca:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:09:09.192506   10481 main.go:141] libmachine: STDOUT: 
	I1008 11:09:09.192525   10481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:09.192562   10481 fix.go:56] duration metric: took 14.441208ms for fixHost
	I1008 11:09:09.192566   10481 start.go:83] releasing machines lock for "embed-certs-149000", held for 14.455917ms
	W1008 11:09:09.192573   10481 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:09.192620   10481 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:09.192625   10481 start.go:729] Will try again in 5 seconds ...
	I1008 11:09:14.194769   10481 start.go:360] acquireMachinesLock for embed-certs-149000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:14.195190   10481 start.go:364] duration metric: took 297.334µs to acquireMachinesLock for "embed-certs-149000"
	I1008 11:09:14.195324   10481 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:09:14.195350   10481 fix.go:54] fixHost starting: 
	I1008 11:09:14.196095   10481 fix.go:112] recreateIfNeeded on embed-certs-149000: state=Stopped err=<nil>
	W1008 11:09:14.196127   10481 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:09:14.204658   10481 out.go:177] * Restarting existing qemu2 VM for "embed-certs-149000" ...
	I1008 11:09:14.207583   10481 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:14.207832   10481 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/qemu.pid -device virtio-net-pci,netdev=net0,mac=56:f9:7d:3e:ca:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/embed-certs-149000/disk.qcow2
	I1008 11:09:14.218546   10481 main.go:141] libmachine: STDOUT: 
	I1008 11:09:14.218601   10481 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:14.218681   10481 fix.go:56] duration metric: took 23.338667ms for fixHost
	I1008 11:09:14.218696   10481 start.go:83] releasing machines lock for "embed-certs-149000", held for 23.483834ms
	W1008 11:09:14.218887   10481 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-149000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:14.226530   10481 out.go:201] 
	W1008 11:09:14.230538   10481 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:14.230560   10481 out.go:270] * 
	* 
	W1008 11:09:14.233449   10481 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:09:14.241526   10481 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-149000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (72.615917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.27s)
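
The second start follows the same shape as the first: one failed host start, a fixed five-second wait ("Will try again in 5 seconds ..."), one retry, then the GUEST_PROVISION exit. A minimal sketch of that single-retry pattern; startHost here is a stand-in that always fails the way the log does, not minikube's actual driver code:

    // retrysketch.go - the retry shape visible in the log above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // startHost stands in for the qemu2 driver start; while socket_vmnet
    // is down it always returns the same connection error.
    func startHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        if err := startHost(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            time.Sleep(5 * time.Second)
            if err := startHost(); err != nil {
                fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
            }
        }
    }
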

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-149000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (35.783167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-149000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.959ms)

** stderr ** 
	error: context "embed-certs-149000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-149000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (33.718042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-149000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
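
The block above matches go-cmp's (-want +got) diff format: every expected image sits on the -want side because image list had nothing to report from a host that never ran. A small reproduction of that kind of output with github.com/google/go-cmp (want list abbreviated here):

    // imagediff.go - produces the same (-want +got) style diff as the
    // assertion above; the want list is shortened for illustration.
    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "gcr.io/k8s-minikube/storage-provisioner:v5",
            "registry.k8s.io/kube-apiserver:v1.31.1",
            "registry.k8s.io/pause:3.10",
        }
        got := []string{} // nothing listed: the VM never started
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.31.1 images missing (-want +got):\n%s", diff)
        }
    }
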
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (33.55425ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/embed-certs/serial/Pause (0.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-149000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-149000 --alsologtostderr -v=1: exit status 83 (43.897167ms)

-- stdout --
	* The control-plane node embed-certs-149000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-149000"

-- /stdout --
** stderr ** 
	I1008 11:09:14.534544   10510 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:14.534745   10510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:14.534748   10510 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:14.534751   10510 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:14.534868   10510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:14.535085   10510 out.go:352] Setting JSON to false
	I1008 11:09:14.535093   10510 mustload.go:65] Loading cluster: embed-certs-149000
	I1008 11:09:14.535303   10510 config.go:182] Loaded profile config "embed-certs-149000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:14.538296   10510 out.go:177] * The control-plane node embed-certs-149000 host is not running: state=Stopped
	I1008 11:09:14.542311   10510 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-149000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-149000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (33.294333ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (33.8525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-149000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
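
Every post-mortem in this group runs the same probe: minikube status --format={{.Host}}, which prints "Stopped" and exits 7 when the profile exists but its VM is down. A sketch of driving that probe from Go and reading the exit code (binary path and profile name taken from the log above):

    // statusprobe.go - runs the post-mortem status probe and interprets
    // exit status 7 ("Stopped", may be ok) the way the helpers do.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-darwin-arm64", "status",
            "--format={{.Host}}", "-p", "embed-certs-149000")
        out, err := cmd.Output() // stdout is captured even on failure
        fmt.Printf("state=%q\n", out)
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 7 {
            fmt.Println("host not running (exit status 7, may be ok)")
        }
    }
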

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-186000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-186000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.988744125s)

-- stdout --
	* [default-k8s-diff-port-186000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-186000" primary control-plane node in "default-k8s-diff-port-186000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-186000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:09:14.992226   10534 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:14.992370   10534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:14.992374   10534 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:14.992377   10534 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:14.992537   10534 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:14.993720   10534 out.go:352] Setting JSON to false
	I1008 11:09:15.011651   10534 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5925,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:09:15.011711   10534 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:09:15.016298   10534 out.go:177] * [default-k8s-diff-port-186000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:09:15.023270   10534 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:09:15.023332   10534 notify.go:220] Checking for updates...
	I1008 11:09:15.031202   10534 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:09:15.034236   10534 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:09:15.037261   10534 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:09:15.040199   10534 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:09:15.043261   10534 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:09:15.046671   10534 config.go:182] Loaded profile config "cert-expiration-831000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:15.046745   10534 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:15.046792   10534 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:09:15.051183   10534 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:09:15.058259   10534 start.go:297] selected driver: qemu2
	I1008 11:09:15.058269   10534 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:09:15.058279   10534 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:09:15.060782   10534 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 11:09:15.064165   10534 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:09:15.067296   10534 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:09:15.067319   10534 cni.go:84] Creating CNI manager for ""
	I1008 11:09:15.067353   10534 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:09:15.067363   10534 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:09:15.067409   10534 start.go:340] cluster config:
	{Name:default-k8s-diff-port-186000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:15.072178   10534 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:09:15.080200   10534 out.go:177] * Starting "default-k8s-diff-port-186000" primary control-plane node in "default-k8s-diff-port-186000" cluster
	I1008 11:09:15.084237   10534 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:09:15.084274   10534 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:09:15.084290   10534 cache.go:56] Caching tarball of preloaded images
	I1008 11:09:15.084384   10534 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:09:15.084390   10534 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:09:15.084454   10534 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/default-k8s-diff-port-186000/config.json ...
	I1008 11:09:15.084466   10534 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/default-k8s-diff-port-186000/config.json: {Name:mkd0019b86f37dfd397c3b046d4a5b1fe22ff266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:09:15.084855   10534 start.go:360] acquireMachinesLock for default-k8s-diff-port-186000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:15.084910   10534 start.go:364] duration metric: took 46.792µs to acquireMachinesLock for "default-k8s-diff-port-186000"
	I1008 11:09:15.084922   10534 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:09:15.084965   10534 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:09:15.088188   10534 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:09:15.106135   10534 start.go:159] libmachine.API.Create for "default-k8s-diff-port-186000" (driver="qemu2")
	I1008 11:09:15.106166   10534 client.go:168] LocalClient.Create starting
	I1008 11:09:15.106252   10534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:09:15.106290   10534 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:15.106300   10534 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:15.106351   10534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:09:15.106387   10534 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:15.106396   10534 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:15.106855   10534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:09:15.252855   10534 main.go:141] libmachine: Creating SSH key...
	I1008 11:09:15.528898   10534 main.go:141] libmachine: Creating Disk image...
	I1008 11:09:15.528913   10534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:09:15.529202   10534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:15.539881   10534 main.go:141] libmachine: STDOUT: 
	I1008 11:09:15.539906   10534 main.go:141] libmachine: STDERR: 
	I1008 11:09:15.539973   10534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2 +20000M
	I1008 11:09:15.548487   10534 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:09:15.548502   10534 main.go:141] libmachine: STDERR: 
	I1008 11:09:15.548522   10534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:15.548531   10534 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:09:15.548550   10534 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:15.548581   10534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c6:29:ae:44:ff:dc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:15.550442   10534 main.go:141] libmachine: STDOUT: 
	I1008 11:09:15.550456   10534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:15.550475   10534 client.go:171] duration metric: took 444.31075ms to LocalClient.Create
	I1008 11:09:17.552698   10534 start.go:128] duration metric: took 2.467752791s to createHost
	I1008 11:09:17.552775   10534 start.go:83] releasing machines lock for "default-k8s-diff-port-186000", held for 2.467897333s
	W1008 11:09:17.552885   10534 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:17.571286   10534 out.go:177] * Deleting "default-k8s-diff-port-186000" in qemu2 ...
	W1008 11:09:17.597542   10534 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:17.597573   10534 start.go:729] Will try again in 5 seconds ...
	I1008 11:09:22.599750   10534 start.go:360] acquireMachinesLock for default-k8s-diff-port-186000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:22.600359   10534 start.go:364] duration metric: took 500.083µs to acquireMachinesLock for "default-k8s-diff-port-186000"
	I1008 11:09:22.600496   10534 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:09:22.600764   10534 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:09:22.610349   10534 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:09:22.659170   10534 start.go:159] libmachine.API.Create for "default-k8s-diff-port-186000" (driver="qemu2")
	I1008 11:09:22.659225   10534 client.go:168] LocalClient.Create starting
	I1008 11:09:22.659370   10534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:09:22.659472   10534 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:22.659493   10534 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:22.659567   10534 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:09:22.659623   10534 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:22.659643   10534 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:22.660420   10534 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:09:22.847494   10534 main.go:141] libmachine: Creating SSH key...
	I1008 11:09:22.884090   10534 main.go:141] libmachine: Creating Disk image...
	I1008 11:09:22.884095   10534 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:09:22.884289   10534 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:22.894031   10534 main.go:141] libmachine: STDOUT: 
	I1008 11:09:22.894049   10534 main.go:141] libmachine: STDERR: 
	I1008 11:09:22.894109   10534 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2 +20000M
	I1008 11:09:22.902432   10534 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:09:22.902448   10534 main.go:141] libmachine: STDERR: 
	I1008 11:09:22.902461   10534 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:22.902466   10534 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:09:22.902477   10534 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:22.902515   10534 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b8:4c:25:7a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:22.904352   10534 main.go:141] libmachine: STDOUT: 
	I1008 11:09:22.904370   10534 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:22.904383   10534 client.go:171] duration metric: took 245.156167ms to LocalClient.Create
	I1008 11:09:24.906517   10534 start.go:128] duration metric: took 2.305766084s to createHost
	I1008 11:09:24.906575   10534 start.go:83] releasing machines lock for "default-k8s-diff-port-186000", held for 2.306232917s
	W1008 11:09:24.906958   10534 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-186000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:24.915597   10534 out.go:201] 
	W1008 11:09:24.921669   10534 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:24.921716   10534 out.go:270] * 
	* 
	W1008 11:09:24.924336   10534 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:09:24.933527   10534 out.go:201] 

** /stderr **
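Note on the launch mechanism: every VM start in this group goes through /opt/socket_vmnet/bin/socket_vmnet_client, which connects to the daemon's unix socket and hands qemu the already-connected socket as file descriptor 3 (hence the "-netdev socket,id=net0,fd=3" argument in the command line above). A minimal Go sketch of that fd-passing pattern, using only the socket path from the log; the real client is a separate program, so this is illustrative, not its actual source:

    // fd-passing sketch: connect to the socket_vmnet daemon socket and expose
    // the connection to a child process as fd 3 ("-netdev socket,fd=3").
    package main

    import (
    	"net"
    	"os"
    	"os/exec"
    )

    func main() {
    	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
    	if err != nil {
    		panic(err) // on this host: "connection refused", as in the log
    	}
    	f, err := conn.(*net.UnixConn).File()
    	if err != nil {
    		panic(err)
    	}
    	cmd := exec.Command("qemu-system-aarch64") // real flags elided
    	cmd.ExtraFiles = []*os.File{f}             // ExtraFiles[0] becomes fd 3
    	if err := cmd.Start(); err != nil {
    		panic(err)
    	}
    }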
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-186000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (70.066458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (10.06s)
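The proximate cause of this failure, and of every other failure in this group, is that nothing is listening on /var/run/socket_vmnet. The precondition can be reproduced outside the suite with a short Go probe (illustrative; it simply dials the same socket socket_vmnet_client uses):

    // Probe the socket_vmnet daemon socket; prints the same "connection
    // refused" error seen throughout the log when the daemon is down.
    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
    	if err != nil {
    		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
    		os.Exit(1)
    	}
    	conn.Close()
    	fmt.Println("socket_vmnet is accepting connections")
    }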

TestStartStop/group/newest-cni/serial/FirstStart (9.94s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (9.866357625s)

-- stdout --
	* [newest-cni-197000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-197000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:09:17.814377   10550 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:17.814534   10550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:17.814537   10550 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:17.814539   10550 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:17.814673   10550 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:17.815782   10550 out.go:352] Setting JSON to false
	I1008 11:09:17.833438   10550 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5927,"bootTime":1728405030,"procs":566,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:09:17.833512   10550 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:09:17.840328   10550 out.go:177] * [newest-cni-197000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:09:17.847241   10550 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:09:17.847287   10550 notify.go:220] Checking for updates...
	I1008 11:09:17.854223   10550 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:09:17.857273   10550 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:09:17.860236   10550 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:09:17.863138   10550 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:09:17.866213   10550 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:09:17.869619   10550 config.go:182] Loaded profile config "default-k8s-diff-port-186000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:17.869691   10550 config.go:182] Loaded profile config "multinode-437000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:17.869742   10550 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:09:17.873184   10550 out.go:177] * Using the qemu2 driver based on user configuration
	I1008 11:09:17.880228   10550 start.go:297] selected driver: qemu2
	I1008 11:09:17.880236   10550 start.go:901] validating driver "qemu2" against <nil>
	I1008 11:09:17.880242   10550 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:09:17.882662   10550 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1008 11:09:17.882702   10550 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1008 11:09:17.890226   10550 out.go:177] * Automatically selected the socket_vmnet network
	I1008 11:09:17.893333   10550 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 11:09:17.893355   10550 cni.go:84] Creating CNI manager for ""
	I1008 11:09:17.893380   10550 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:09:17.893384   10550 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 11:09:17.893422   10550 start.go:340] cluster config:
	{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:17.898282   10550 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:09:17.906267   10550 out.go:177] * Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	I1008 11:09:17.910244   10550 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:09:17.910262   10550 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:09:17.910272   10550 cache.go:56] Caching tarball of preloaded images
	I1008 11:09:17.910357   10550 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:09:17.910363   10550 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:09:17.910429   10550 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/newest-cni-197000/config.json ...
	I1008 11:09:17.910440   10550 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/newest-cni-197000/config.json: {Name:mk62437613e6378299ce65c079ea54bacb50e3b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 11:09:17.910707   10550 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:17.910759   10550 start.go:364] duration metric: took 45.375µs to acquireMachinesLock for "newest-cni-197000"
	I1008 11:09:17.910770   10550 start.go:93] Provisioning new machine with config: &{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:09:17.910801   10550 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:09:17.915232   10550 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:09:17.933262   10550 start.go:159] libmachine.API.Create for "newest-cni-197000" (driver="qemu2")
	I1008 11:09:17.933297   10550 client.go:168] LocalClient.Create starting
	I1008 11:09:17.933371   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:09:17.933417   10550 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:17.933432   10550 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:17.933474   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:09:17.933508   10550 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:17.933517   10550 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:17.933928   10550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:09:18.079155   10550 main.go:141] libmachine: Creating SSH key...
	I1008 11:09:18.274719   10550 main.go:141] libmachine: Creating Disk image...
	I1008 11:09:18.274730   10550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:09:18.274945   10550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:18.285295   10550 main.go:141] libmachine: STDOUT: 
	I1008 11:09:18.285309   10550 main.go:141] libmachine: STDERR: 
	I1008 11:09:18.285367   10550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2 +20000M
	I1008 11:09:18.293854   10550 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:09:18.293877   10550 main.go:141] libmachine: STDERR: 
	I1008 11:09:18.293892   10550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:18.293898   10550 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:09:18.293910   10550 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:18.293937   10550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:d5:9e:7b:ab:f8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:18.295777   10550 main.go:141] libmachine: STDOUT: 
	I1008 11:09:18.295797   10550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:18.295820   10550 client.go:171] duration metric: took 362.523292ms to LocalClient.Create
	I1008 11:09:20.298027   10550 start.go:128] duration metric: took 2.387240709s to createHost
	I1008 11:09:20.298129   10550 start.go:83] releasing machines lock for "newest-cni-197000", held for 2.387391875s
	W1008 11:09:20.298214   10550 start.go:714] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:20.309227   10550 out.go:177] * Deleting "newest-cni-197000" in qemu2 ...
	W1008 11:09:20.334660   10550 out.go:270] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:20.334694   10550 start.go:729] Will try again in 5 seconds ...
	I1008 11:09:25.336355   10550 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:25.336493   10550 start.go:364] duration metric: took 111.208µs to acquireMachinesLock for "newest-cni-197000"
	I1008 11:09:25.336526   10550 start.go:93] Provisioning new machine with config: &{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1008 11:09:25.336597   10550 start.go:125] createHost starting for "" (driver="qemu2")
	I1008 11:09:25.342746   10550 out.go:235] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1008 11:09:25.361635   10550 start.go:159] libmachine.API.Create for "newest-cni-197000" (driver="qemu2")
	I1008 11:09:25.361660   10550 client.go:168] LocalClient.Create starting
	I1008 11:09:25.361713   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/ca.pem
	I1008 11:09:25.361746   10550 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:25.361757   10550 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:25.361791   10550 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19774-6384/.minikube/certs/cert.pem
	I1008 11:09:25.361809   10550 main.go:141] libmachine: Decoding PEM data...
	I1008 11:09:25.361816   10550 main.go:141] libmachine: Parsing certificate...
	I1008 11:09:25.362138   10550 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso...
	I1008 11:09:25.519777   10550 main.go:141] libmachine: Creating SSH key...
	I1008 11:09:25.578824   10550 main.go:141] libmachine: Creating Disk image...
	I1008 11:09:25.578830   10550 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I1008 11:09:25.579044   10550 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2.raw /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:25.589108   10550 main.go:141] libmachine: STDOUT: 
	I1008 11:09:25.589125   10550 main.go:141] libmachine: STDERR: 
	I1008 11:09:25.589209   10550 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2 +20000M
	I1008 11:09:25.597785   10550 main.go:141] libmachine: STDOUT: Image resized.
	
	I1008 11:09:25.597807   10550 main.go:141] libmachine: STDERR: 
	I1008 11:09:25.597819   10550 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2.raw and /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:25.597824   10550 main.go:141] libmachine: Starting QEMU VM...
	I1008 11:09:25.597832   10550 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:25.597865   10550 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:df:72:b0:45:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:25.599691   10550 main.go:141] libmachine: STDOUT: 
	I1008 11:09:25.599704   10550 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:25.599717   10550 client.go:171] duration metric: took 238.056417ms to LocalClient.Create
	I1008 11:09:27.601960   10550 start.go:128] duration metric: took 2.265373s to createHost
	I1008 11:09:27.602060   10550 start.go:83] releasing machines lock for "newest-cni-197000", held for 2.265558208s
	W1008 11:09:27.602483   10550 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:27.616176   10550 out.go:201] 
	W1008 11:09:27.620269   10550 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:27.620293   10550 out.go:270] * 
	* 
	W1008 11:09:27.622936   10550 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:09:27.634090   10550 out.go:201] 

** /stderr **
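Both attempts get as far as disk preparation before the network launch fails: qemu-img convert turns the raw seed image into qcow2, then qemu-img resize grows it by 20000M, exactly as logged above. A sketch of that two-step sequence (paths are placeholders; the suite uses per-profile paths under .minikube/machines):

    // Two-step disk creation mirroring the log: raw -> qcow2, then +20000M.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func createDisk(raw, qcow2 string) error {
    	steps := [][]string{
    		{"qemu-img", "convert", "-f", "raw", "-O", "qcow2", raw, qcow2},
    		{"qemu-img", "resize", qcow2, "+20000M"},
    	}
    	for _, s := range steps {
    		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v failed: %v: %s", s, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := createDisk("disk.qcow2.raw", "disk.qcow2"); err != nil {
    		fmt.Println(err)
    	}
    }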
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (69.842167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.94s)
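The retry shape is identical in every start attempt in this group: createHost fails, the half-created profile is deleted, minikube waits 5 seconds ("Will try again in 5 seconds ..."), retries once, and then exits with status 80 (GUEST_PROVISION). Sketched in Go for clarity; startHost is a stand-in, not minikube's actual function signature:

    // One failed attempt, a 5-second pause, one retry, then a hard failure.
    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    func startWithRetry(startHost func() error) error {
    	if err := startHost(); err != nil {
    		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
    		time.Sleep(5 * time.Second)
    		if err := startHost(); err != nil {
    			return fmt.Errorf("error provisioning guest: %w", err)
    		}
    	}
    	return nil
    }

    func main() {
    	err := startWithRetry(func() error {
    		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    	})
    	fmt.Println(err) // both attempts fail, as with exit status 80 above
    }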

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-186000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186000 create -f testdata/busybox.yaml: exit status 1 (29.184916ms)

** stderr ** 
	error: context "default-k8s-diff-port-186000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-186000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (32.960417ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (32.974292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.10s)
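This failure is purely downstream of FirstStart: the cluster was never created, so the kubeconfig context "default-k8s-diff-port-186000" was never written, and every kubectl --context invocation fails immediately. A sketch of the missing-context check, shelling out to kubectl (illustrative only; not how the suite does it):

    // Report whether a kubeconfig context exists before trying to use it.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func contextExists(name string) bool {
    	// `kubectl config get-contexts -o name` prints one context per line.
    	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
    	if err != nil {
    		return false
    	}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		if line == name {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	fmt.Println(contextExists("default-k8s-diff-port-186000")) // false on this host
    }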

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-186000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-186000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186000 describe deploy/metrics-server -n kube-system: exit status 1 (27.579334ms)

** stderr ** 
	error: context "default-k8s-diff-port-186000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-186000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (33.214041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)
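The assertion expects the metrics-server image reference to have been rewritten to "fake.domain/registry.k8s.io/echoserver:1.4", i.e. the --registries override prefixed onto the --images override. The actual rewriting happens inside minikube's addon code; this sketch only shows how the expected string in the assertion is formed:

    // How the expected image reference is composed from the two overrides.
    package main

    import "fmt"

    func main() {
    	customImage := "registry.k8s.io/echoserver:1.4" // --images=MetricsServer=...
    	customRegistry := "fake.domain"                 // --registries=MetricsServer=...
    	fmt.Println(customRegistry + "/" + customImage)
    	// Output: fake.domain/registry.k8s.io/echoserver:1.4
    }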

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-186000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-186000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.192552541s)

-- stdout --
	* [default-k8s-diff-port-186000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-186000" primary control-plane node in "default-k8s-diff-port-186000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-186000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-186000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:09:28.923514   10618 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:28.923677   10618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:28.923680   10618 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:28.923683   10618 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:28.923811   10618 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:28.924879   10618 out.go:352] Setting JSON to false
	I1008 11:09:28.942822   10618 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5938,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:09:28.942888   10618 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:09:28.947762   10618 out.go:177] * [default-k8s-diff-port-186000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:09:28.954670   10618 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:09:28.954713   10618 notify.go:220] Checking for updates...
	I1008 11:09:28.961744   10618 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:09:28.963079   10618 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:09:28.965732   10618 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:09:28.968774   10618 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:09:28.971781   10618 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:09:28.975125   10618 config.go:182] Loaded profile config "default-k8s-diff-port-186000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:28.975382   10618 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:09:28.979772   10618 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 11:09:28.986719   10618 start.go:297] selected driver: qemu2
	I1008 11:09:28.986726   10618 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kubernetes
Config:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-186000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:f
alse ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:28.986775   10618 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:09:28.989267   10618 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1008 11:09:28.989294   10618 cni.go:84] Creating CNI manager for ""
	I1008 11:09:28.989328   10618 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:09:28.989371   10618 start.go:340] cluster config:
	{Name:default-k8s-diff-port-186000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:default-k8s-diff-port-186000 Name
space:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:28.993948   10618 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:09:29.001725   10618 out.go:177] * Starting "default-k8s-diff-port-186000" primary control-plane node in "default-k8s-diff-port-186000" cluster
	I1008 11:09:29.005787   10618 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:09:29.005800   10618 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:09:29.005808   10618 cache.go:56] Caching tarball of preloaded images
	I1008 11:09:29.005876   10618 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:09:29.005882   10618 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:09:29.005939   10618 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/default-k8s-diff-port-186000/config.json ...
	I1008 11:09:29.006401   10618 start.go:360] acquireMachinesLock for default-k8s-diff-port-186000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:29.006438   10618 start.go:364] duration metric: took 29.375µs to acquireMachinesLock for "default-k8s-diff-port-186000"
	I1008 11:09:29.006447   10618 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:09:29.006451   10618 fix.go:54] fixHost starting: 
	I1008 11:09:29.006570   10618 fix.go:112] recreateIfNeeded on default-k8s-diff-port-186000: state=Stopped err=<nil>
	W1008 11:09:29.006580   10618 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:09:29.010747   10618 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-186000" ...
	I1008 11:09:29.018731   10618 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:29.018763   10618 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b8:4c:25:7a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:29.020977   10618 main.go:141] libmachine: STDOUT: 
	I1008 11:09:29.021004   10618 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:29.021032   10618 fix.go:56] duration metric: took 14.578542ms for fixHost
	I1008 11:09:29.021036   10618 start.go:83] releasing machines lock for "default-k8s-diff-port-186000", held for 14.593334ms
	W1008 11:09:29.021044   10618 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:29.021077   10618 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:29.021082   10618 start.go:729] Will try again in 5 seconds ...
	I1008 11:09:34.023133   10618 start.go:360] acquireMachinesLock for default-k8s-diff-port-186000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:34.023541   10618 start.go:364] duration metric: took 338.125µs to acquireMachinesLock for "default-k8s-diff-port-186000"
	I1008 11:09:34.023661   10618 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:09:34.023682   10618 fix.go:54] fixHost starting: 
	I1008 11:09:34.024392   10618 fix.go:112] recreateIfNeeded on default-k8s-diff-port-186000: state=Stopped err=<nil>
	W1008 11:09:34.024417   10618 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:09:34.032913   10618 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-186000" ...
	I1008 11:09:34.036973   10618 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:34.037131   10618 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b8:4c:25:7a:92 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/default-k8s-diff-port-186000/disk.qcow2
	I1008 11:09:34.047468   10618 main.go:141] libmachine: STDOUT: 
	I1008 11:09:34.047532   10618 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:34.047608   10618 fix.go:56] duration metric: took 23.932417ms for fixHost
	I1008 11:09:34.047625   10618 start.go:83] releasing machines lock for "default-k8s-diff-port-186000", held for 24.063792ms
	W1008 11:09:34.047818   10618 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-186000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-186000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:34.055930   10618 out.go:201] 
	W1008 11:09:34.060063   10618 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:34.060101   10618 out.go:270] * 
	* 
	W1008 11:09:34.062611   10618 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:09:34.070982   10618 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-186000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (71.829958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.27s)
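
Every failure in this SecondStart group shares one root cause: the qemu2 driver delegates networking to socket_vmnet, and the client cannot reach the daemon's control socket at /var/run/socket_vmnet ("Connection refused"). A minimal triage sketch for the build host follows; the paths are taken from the log above, while the Homebrew service name is an assumption:

    # does the control socket exist? (SocketVMnetPath from the cluster config above)
    ls -l /var/run/socket_vmnet
    # is the daemon loaded? (the launchd label is an assumption)
    sudo launchctl list | grep -i socket_vmnet
    # restart the daemon, assuming it was installed via Homebrew
    sudo brew services restart socket_vmnet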

TestStartStop/group/newest-cni/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1: exit status 80 (5.189690459s)

-- stdout --
	* [newest-cni-197000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	* Restarting existing qemu2 VM for "newest-cni-197000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-197000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I1008 11:09:31.055993   10637 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:31.056162   10637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:31.056165   10637 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:31.056168   10637 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:31.056310   10637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:31.057384   10637 out.go:352] Setting JSON to false
	I1008 11:09:31.074939   10637 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":5941,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 11:09:31.075005   10637 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 11:09:31.080190   10637 out.go:177] * [newest-cni-197000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 11:09:31.087140   10637 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 11:09:31.087202   10637 notify.go:220] Checking for updates...
	I1008 11:09:31.092164   10637 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 11:09:31.095080   10637 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 11:09:31.098123   10637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 11:09:31.101105   10637 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 11:09:31.104082   10637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 11:09:31.107447   10637 config.go:182] Loaded profile config "newest-cni-197000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:31.107731   10637 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 11:09:31.112163   10637 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 11:09:31.119101   10637 start.go:297] selected driver: qemu2
	I1008 11:09:31.119109   10637 start.go:901] validating driver "qemu2" against &{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.31.1 ClusterName:newest-cni-197000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] Lis
tenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:31.119173   10637 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 11:09:31.121705   10637 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1008 11:09:31.121732   10637 cni.go:84] Creating CNI manager for ""
	I1008 11:09:31.121753   10637 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 11:09:31.121782   10637 start.go:340] cluster config:
	{Name:newest-cni-197000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:newest-cni-197000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0
CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 11:09:31.126171   10637 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 11:09:31.134062   10637 out.go:177] * Starting "newest-cni-197000" primary control-plane node in "newest-cni-197000" cluster
	I1008 11:09:31.138147   10637 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 11:09:31.138171   10637 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 11:09:31.138182   10637 cache.go:56] Caching tarball of preloaded images
	I1008 11:09:31.138247   10637 preload.go:172] Found /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I1008 11:09:31.138254   10637 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I1008 11:09:31.138332   10637 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/newest-cni-197000/config.json ...
	I1008 11:09:31.138806   10637 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:31.138839   10637 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "newest-cni-197000"
	I1008 11:09:31.138847   10637 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:09:31.138851   10637 fix.go:54] fixHost starting: 
	I1008 11:09:31.138973   10637 fix.go:112] recreateIfNeeded on newest-cni-197000: state=Stopped err=<nil>
	W1008 11:09:31.138983   10637 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:09:31.143152   10637 out.go:177] * Restarting existing qemu2 VM for "newest-cni-197000" ...
	I1008 11:09:31.151105   10637 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:31.151157   10637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:df:72:b0:45:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:31.153389   10637 main.go:141] libmachine: STDOUT: 
	I1008 11:09:31.153410   10637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:31.153438   10637 fix.go:56] duration metric: took 14.58375ms for fixHost
	I1008 11:09:31.153444   10637 start.go:83] releasing machines lock for "newest-cni-197000", held for 14.60075ms
	W1008 11:09:31.153451   10637 start.go:714] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:31.153495   10637 out.go:270] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:31.153500   10637 start.go:729] Will try again in 5 seconds ...
	I1008 11:09:36.155647   10637 start.go:360] acquireMachinesLock for newest-cni-197000: {Name:mkc7cac7deb056aa375e2bc5fb864c58d336ddda Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1008 11:09:36.156167   10637 start.go:364] duration metric: took 420.416µs to acquireMachinesLock for "newest-cni-197000"
	I1008 11:09:36.156329   10637 start.go:96] Skipping create...Using existing machine configuration
	I1008 11:09:36.156355   10637 fix.go:54] fixHost starting: 
	I1008 11:09:36.157121   10637 fix.go:112] recreateIfNeeded on newest-cni-197000: state=Stopped err=<nil>
	W1008 11:09:36.157148   10637 fix.go:138] unexpected machine state, will restart: <nil>
	I1008 11:09:36.164584   10637 out.go:177] * Restarting existing qemu2 VM for "newest-cni-197000" ...
	I1008 11:09:36.168547   10637 qemu.go:418] Using hvf for hardware acceleration
	I1008 11:09:36.168806   10637 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:df:72:b0:45:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/19774-6384/.minikube/machines/newest-cni-197000/disk.qcow2
	I1008 11:09:36.179014   10637 main.go:141] libmachine: STDOUT: 
	I1008 11:09:36.179078   10637 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I1008 11:09:36.179166   10637 fix.go:56] duration metric: took 22.813792ms for fixHost
	I1008 11:09:36.179184   10637 start.go:83] releasing machines lock for "newest-cni-197000", held for 22.993125ms
	W1008 11:09:36.179335   10637 out.go:270] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-197000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I1008 11:09:36.187539   10637 out.go:201] 
	W1008 11:09:36.190617   10637 out.go:270] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W1008 11:09:36.190743   10637 out.go:270] * 
	* 
	W1008 11:09:36.192571   10637 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 11:09:36.204648   10637 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.31.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (72.199958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.26s)
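
Same socket_vmnet refusal as the previous test. If restarting the daemon does not help, the recovery path minikube's own output suggests is to drop the stale profile and start over (sketch; a subset of the flags from the test invocation above):

    out/minikube-darwin-arm64 delete -p newest-cni-197000
    out/minikube-darwin-arm64 start -p newest-cni-197000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.31.1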

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-186000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (34.350166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)
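
The 'context "default-k8s-diff-port-186000" does not exist' error is a downstream effect: SecondStart never completed, so minikube never wrote the profile's context back into the kubeconfig. Stock kubectl confirms this (kubeconfig path taken from the log):

    kubectl --kubeconfig /Users/jenkins/minikube-integration/19774-6384/kubeconfig config get-contexts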

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-186000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-186000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (27.825292ms)

** stderr ** 
	error: context "default-k8s-diff-port-186000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-186000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (33.658792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-186000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (32.775084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.08s)
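
The (-want +got) block above is a go-cmp-style diff: every expected image carries a leading "-" (wanted but absent) and no "+" lines follow, meaning "image list" returned an empty set because the host is Stopped. Against a healthy profile the same command would enumerate all eight images; a human-readable variant (sketch, assuming a running profile):

    out/minikube-darwin-arm64 -p default-k8s-diff-port-186000 image list --format=table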

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-186000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-186000 --alsologtostderr -v=1: exit status 83 (45.709583ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-186000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-186000"

-- /stdout --
** stderr ** 
	I1008 11:09:34.363334   10656 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:34.363538   10656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:34.363541   10656 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:34.363544   10656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:34.363681   10656 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:34.363903   10656 out.go:352] Setting JSON to false
	I1008 11:09:34.363910   10656 mustload.go:65] Loading cluster: default-k8s-diff-port-186000
	I1008 11:09:34.364130   10656 config.go:182] Loaded profile config "default-k8s-diff-port-186000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:34.368389   10656 out.go:177] * The control-plane node default-k8s-diff-port-186000 host is not running: state=Stopped
	I1008 11:09:34.372376   10656 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-186000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-186000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (33.083083ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (33.363708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-186000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)
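
Note the exit code: pause returns 83 rather than a hard failure. Judging from the output above, 83 is the "advice" exit for a profile whose host is simply not running, distinct from the exit 80 the failed start commands returned. A CI wrapper could branch on it (sketch):

    out/minikube-darwin-arm64 pause -p default-k8s-diff-port-186000
    rc=$?
    [ "$rc" -eq 83 ] && echo "profile exists but host is stopped; start it first"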

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-197000 image list --format=json
start_stop_delete_test.go:304: v1.31.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.3",
- 	"registry.k8s.io/etcd:3.5.15-0",
- 	"registry.k8s.io/kube-apiserver:v1.31.1",
- 	"registry.k8s.io/kube-controller-manager:v1.31.1",
- 	"registry.k8s.io/kube-proxy:v1.31.1",
- 	"registry.k8s.io/kube-scheduler:v1.31.1",
- 	"registry.k8s.io/pause:3.10",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (33.431292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-197000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-197000 --alsologtostderr -v=1: exit status 83 (45.969291ms)

-- stdout --
	* The control-plane node newest-cni-197000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-197000"

-- /stdout --
** stderr ** 
	I1008 11:09:36.400183   10680 out.go:345] Setting OutFile to fd 1 ...
	I1008 11:09:36.400383   10680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:36.400386   10680 out.go:358] Setting ErrFile to fd 2...
	I1008 11:09:36.400389   10680 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 11:09:36.400507   10680 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 11:09:36.400748   10680 out.go:352] Setting JSON to false
	I1008 11:09:36.400754   10680 mustload.go:65] Loading cluster: newest-cni-197000
	I1008 11:09:36.401000   10680 config.go:182] Loaded profile config "newest-cni-197000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 11:09:36.404490   10680 out.go:177] * The control-plane node newest-cni-197000 host is not running: state=Stopped
	I1008 11:09:36.408480   10680 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-197000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-197000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (33.701583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (34.258166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-197000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)


Test pass (80/258)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.31.1/json-events 18.66
13 TestDownloadOnly/v1.31.1/preload-exists 0
16 TestDownloadOnly/v1.31.1/kubectl 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.11
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.29
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
35 TestHyperKitDriverInstallOrUpdate 10.49
39 TestErrorSpam/start 0.37
40 TestErrorSpam/status 0.1
41 TestErrorSpam/pause 0.13
42 TestErrorSpam/unpause 0.12
43 TestErrorSpam/stop 10.39
46 TestFunctional/serial/CopySyncFile 0
48 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/CacheCmd/cache/add_remote 1.93
55 TestFunctional/serial/CacheCmd/cache/add_local 1.04
56 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
57 TestFunctional/serial/CacheCmd/cache/list 0.04
60 TestFunctional/serial/CacheCmd/cache/delete 0.08
69 TestFunctional/parallel/ConfigCmd 0.25
71 TestFunctional/parallel/DryRun 0.29
72 TestFunctional/parallel/InternationalLanguage 0.12
78 TestFunctional/parallel/AddonsCmd 0.1
93 TestFunctional/parallel/License 1.43
94 TestFunctional/parallel/Version/short 0.04
101 TestFunctional/parallel/ImageCommands/Setup 1.66
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
122 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
124 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.08
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.11
126 TestFunctional/parallel/ProfileCmd/profile_list 0.09
127 TestFunctional/parallel/ProfileCmd/profile_json_output 0.09
132 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.05
134 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
135 TestFunctional/delete_echo-server_images 0.05
136 TestFunctional/delete_my-image_image 0.02
137 TestFunctional/delete_minikube_cached_images 0.02
166 TestJSONOutput/start/Audit 0
168 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/pause/Audit 0
174 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/unpause/Audit 0
180 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
183 TestJSONOutput/stop/Command 3.05
184 TestJSONOutput/stop/Audit 0
186 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
188 TestErrorJSONOutput 0.21
193 TestMainNoArgs 0.04
238 TestStoppedBinaryUpgrade/Setup 4.59
240 TestStoppedBinaryUpgrade/MinikubeLogs 0.88
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
258 TestNoKubernetes/serial/ProfileList 0.11
259 TestNoKubernetes/serial/Stop 3.17
261 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
275 TestStartStop/group/old-k8s-version/serial/Stop 3.56
276 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.13
286 TestStartStop/group/no-preload/serial/Stop 3.12
287 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
297 TestStartStop/group/embed-certs/serial/Stop 2.02
298 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
310 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.53
311 TestStartStop/group/newest-cni/serial/DeployApp 0
312 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
313 TestStartStop/group/newest-cni/serial/Stop 3.11
314 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
316 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
322 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
323 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1008 10:42:51.294174    6907 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I1008 10:42:51.294549    6907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
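
This check is purely local: preload-exists only verifies that the versioned tarball is present in minikube's cache, which is why it passes even though no VM can start. The cache can be inspected directly (path from the log):

    ls /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/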

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-430000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-430000: exit status 85 (97.627208ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |          |
	|         | -p download-only-430000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 10:42:09
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 10:42:09.511836    6908 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:42:09.512027    6908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:42:09.512030    6908 out.go:358] Setting ErrFile to fd 2...
	I1008 10:42:09.512033    6908 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:42:09.512150    6908 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	W1008 10:42:09.512222    6908 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19774-6384/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19774-6384/.minikube/config/config.json: no such file or directory
	I1008 10:42:09.513677    6908 out.go:352] Setting JSON to true
	I1008 10:42:09.531464    6908 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4299,"bootTime":1728405030,"procs":565,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:42:09.531530    6908 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:42:09.536986    6908 out.go:97] [download-only-430000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:42:09.537110    6908 notify.go:220] Checking for updates...
	W1008 10:42:09.537171    6908 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball: no such file or directory
	I1008 10:42:09.540956    6908 out.go:169] MINIKUBE_LOCATION=19774
	I1008 10:42:09.543910    6908 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:42:09.547964    6908 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:42:09.550989    6908 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:42:09.553886    6908 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	W1008 10:42:09.559954    6908 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 10:42:09.560147    6908 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:42:09.562973    6908 out.go:97] Using the qemu2 driver based on user configuration
	I1008 10:42:09.562991    6908 start.go:297] selected driver: qemu2
	I1008 10:42:09.563016    6908 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:42:09.563080    6908 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:42:09.565917    6908 out.go:169] Automatically selected the socket_vmnet network
	I1008 10:42:09.571419    6908 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1008 10:42:09.571505    6908 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 10:42:09.571543    6908 cni.go:84] Creating CNI manager for ""
	I1008 10:42:09.571579    6908 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1008 10:42:09.571638    6908 start.go:340] cluster config:
	{Name:download-only-430000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-430000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:42:09.576060    6908 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:42:09.580893    6908 out.go:97] Downloading VM boot image ...
	I1008 10:42:09.580921    6908 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/iso/arm64/minikube-v1.34.0-1728382514-19774-arm64.iso
	I1008 10:42:31.422033    6908 out.go:97] Starting "download-only-430000" primary control-plane node in "download-only-430000" cluster
	I1008 10:42:31.422053    6908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:42:31.719337    6908 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 10:42:31.719381    6908 cache.go:56] Caching tarball of preloaded images
	I1008 10:42:31.720257    6908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:42:31.725247    6908 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1008 10:42:31.725286    6908 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:32.302841    6908 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I1008 10:42:49.961301    6908 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:49.961487    6908 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:50.654301    6908 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I1008 10:42:50.654517    6908 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/download-only-430000/config.json ...
	I1008 10:42:50.654535    6908 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19774-6384/.minikube/profiles/download-only-430000/config.json: {Name:mkb474b260b53c66663610ddbd7d258188150971 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1008 10:42:50.654777    6908 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I1008 10:42:50.655001    6908 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I1008 10:42:51.244836    6908 out.go:193] 
	W1008 10:42:51.249974    6908 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/19774-6384/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0 0x1093fcfa0] Decompressors:map[bz2:0x1400078ab20 gz:0x1400078ab28 tar:0x1400078aad0 tar.bz2:0x1400078aae0 tar.gz:0x1400078aaf0 tar.xz:0x1400078ab00 tar.zst:0x1400078ab10 tbz2:0x1400078aae0 tgz:0x1400078aaf0 txz:0x1400078ab00 tzst:0x1400078ab10 xz:0x1400078ab40 zip:0x1400078ab60 zst:0x1400078ab48] Getters:map[file:0x140001857a0 http:0x140000dae60 https:0x140000dafa0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W1008 10:42:51.250009    6908 out_reason.go:110] 
	W1008 10:42:51.257782    6908 out.go:283] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I1008 10:42:51.260773    6908 out.go:193] 
	
	
	* The control-plane node download-only-430000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-430000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
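
The "Last Start" log above records the root cause of the earlier kubectl caching failure: the checksum file for the v1.20.0 darwin/arm64 kubectl returned HTTP 404, presumably because no darwin/arm64 kubectl was ever published for that release. The check can be reproduced directly (URL taken verbatim from the log; status as observed in this run):

	curl -sI -o /dev/null -w '%{http_code}\n' -L \
	  https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256    # printed 404 here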

TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-430000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnly/v1.31.1/json-events (18.66s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-500000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-500000 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=qemu2 : (18.662660375s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (18.66s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1008 10:43:10.324429    6907 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I1008 10:43:10.324496    6907 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
--- PASS: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-500000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-500000: exit status 85 (81.72425ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |                     |
	|         | -p download-only-430000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT | 08 Oct 24 10:42 PDT |
	| delete  | -p download-only-430000        | download-only-430000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT | 08 Oct 24 10:42 PDT |
	| start   | -o=json --download-only        | download-only-500000 | jenkins | v1.34.0 | 08 Oct 24 10:42 PDT |                     |
	|         | -p download-only-500000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/08 10:42:51
	Running on machine: MacOS-M1-Agent-1
	Binary: Built with gc go1.23.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1008 10:42:51.692228    6937 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:42:51.692387    6937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:42:51.692391    6937 out.go:358] Setting ErrFile to fd 2...
	I1008 10:42:51.692393    6937 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:42:51.692512    6937 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:42:51.693624    6937 out.go:352] Setting JSON to true
	I1008 10:42:51.711616    6937 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4341,"bootTime":1728405030,"procs":556,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:42:51.711680    6937 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:42:51.716607    6937 out.go:97] [download-only-500000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:42:51.716692    6937 notify.go:220] Checking for updates...
	I1008 10:42:51.720593    6937 out.go:169] MINIKUBE_LOCATION=19774
	I1008 10:42:51.723627    6937 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:42:51.727632    6937 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:42:51.730646    6937 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:42:51.733629    6937 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	W1008 10:42:51.739630    6937 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1008 10:42:51.739778    6937 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:42:51.741229    6937 out.go:97] Using the qemu2 driver based on user configuration
	I1008 10:42:51.741237    6937 start.go:297] selected driver: qemu2
	I1008 10:42:51.741240    6937 start.go:901] validating driver "qemu2" against <nil>
	I1008 10:42:51.741280    6937 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1008 10:42:51.744619    6937 out.go:169] Automatically selected the socket_vmnet network
	I1008 10:42:51.749969    6937 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I1008 10:42:51.750064    6937 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1008 10:42:51.750082    6937 cni.go:84] Creating CNI manager for ""
	I1008 10:42:51.750110    6937 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1008 10:42:51.750116    6937 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1008 10:42:51.750162    6937 start.go:340] cluster config:
	{Name:download-only-500000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-500000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:42:51.754559    6937 iso.go:125] acquiring lock: {Name:mkaf52095b925b4b17d232f7208e3841c46145ba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1008 10:42:51.757653    6937 out.go:97] Starting "download-only-500000" primary control-plane node in "download-only-500000" cluster
	I1008 10:42:51.757660    6937 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:42:52.419534    6937 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I1008 10:42:52.419620    6937 cache.go:56] Caching tarball of preloaded images
	I1008 10:42:52.420530    6937 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I1008 10:42:52.426222    6937 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I1008 10:42:52.426266    6937 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I1008 10:42:52.989055    6937 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /Users/jenkins/minikube-integration/19774-6384/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-500000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-500000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.11s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-500000
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.11s)

TestBinaryMirror (0.29s)

=== RUN   TestBinaryMirror
I1008 10:43:10.843223    6907 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/darwin/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-678000 --alsologtostderr --binary-mirror http://127.0.0.1:51023 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-678000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-678000
--- PASS: TestBinaryMirror (0.29s)
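
The --binary-mirror flag points the kubectl/kubelet/kubeadm downloads at an alternative host; here the test harness serves one on 127.0.0.1:51023. A hypothetical stand-in mirror, assuming it must mimic the dl.k8s.io/release path layout:

	mkdir -p mirror/v1.31.1/bin/darwin/arm64    # copy kubectl and kubectl.sha256 in here first
	cd mirror && python3 -m http.server 51023   # then: minikube start --binary-mirror http://127.0.0.1:51023 ...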

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:934: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-147000
addons_test.go:934: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-147000: exit status 85 (64.168875ms)

-- stdout --
	* Profile "addons-147000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-147000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:945: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-147000
addons_test.go:945: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-147000: exit status 85 (60.266667ms)

-- stdout --
	* Profile "addons-147000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-147000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)
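
Both PreSetup checks exercise the same contract: addon commands against a profile that does not exist fail with exit status 85 instead of creating anything. A sketch of verifying that from a shell:

	out/minikube-darwin-arm64 addons disable dashboard -p addons-147000
	echo $?    # 85 in this run, matching the log above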

TestHyperKitDriverInstallOrUpdate (10.49s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

=== CONT  TestHyperKitDriverInstallOrUpdate
I1008 11:05:46.609088    6907 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 11:05:46.609226    6907 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-without-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
W1008 11:05:48.622609    6907 install.go:62] docker-machine-driver-hyperkit: exit status 1
W1008 11:05:48.622813    6907 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1008 11:05:48.622863    6907 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit
I1008 11:05:49.157869    6907 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 Dst:/var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit.download Pwd: Mode:2 Umask:---------- Detectors:[0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0 0x10979e3c0] Decompressors:map[bz2:0x14000835650 gz:0x14000835658 tar:0x14000835600 tar.bz2:0x14000835610 tar.gz:0x14000835620 tar.xz:0x14000835630 tar.zst:0x14000835640 tbz2:0x14000835610 tgz:0x14000835620 txz:0x14000835630 tzst:0x14000835640 xz:0x14000835660 zip:0x14000835670 zst:0x14000835668] Getters:map[file:0x140015f40c0 http:0x14000c0e410 https:0x14000c0e460] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1008 11:05:49.157985    6907 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit
--- PASS: TestHyperKitDriverInstallOrUpdate (10.49s)
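
The install log shows the fallback order: the arch-specific hyperkit driver asset 404s, so the installer retries the common (unsuffixed) name. Both release URLs, taken verbatim from the log, can be probed directly:

	curl -sI -o /dev/null -w '%{http_code}\n' -L https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64    # 404 in this run
	curl -sI -o /dev/null -w '%{http_code}\n' -L https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit          # this download succeeded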

TestErrorSpam/start (0.37s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status: exit status 7 (33.945291ms)

-- stdout --
	nospam-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status: exit status 7 (32.855291ms)

-- stdout --
	nospam-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status: exit status 7 (32.816125ms)

-- stdout --
	nospam-757000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)

TestErrorSpam/pause (0.13s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause: exit status 83 (43.446583ms)

-- stdout --
	* The control-plane node nospam-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-757000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause: exit status 83 (41.185625ms)

-- stdout --
	* The control-plane node nospam-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-757000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause: exit status 83 (42.54875ms)

-- stdout --
	* The control-plane node nospam-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-757000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.13s)

TestErrorSpam/unpause (0.12s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause: exit status 83 (41.661334ms)

-- stdout --
	* The control-plane node nospam-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-757000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause: exit status 83 (41.989ms)

-- stdout --
	* The control-plane node nospam-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-757000"

-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause: exit status 83 (39.726791ms)

-- stdout --
	* The control-plane node nospam-757000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-757000"

-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.12s)

TestErrorSpam/stop (10.39s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 stop: (3.400858417s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 stop: (3.681474625s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-757000 --log_dir /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/nospam-757000 stop: (3.306530042s)
--- PASS: TestErrorSpam/stop (10.39s)
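
The 10.39s total is essentially the sum of the three stop invocations themselves: 3.40s + 3.68s + 3.31s ≈ 10.39s.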

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /Users/jenkins/minikube-integration/19774-6384/.minikube/files/etc/test/nested/copy/6907/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (1.93s)

TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalserialCacheCmdcacheadd_local631743766/001
functional_test.go:1089: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cache add minikube-local-cache-test:functional-099000
functional_test.go:1094: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 cache delete minikube-local-cache-test:functional-099000
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-099000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.04s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.25s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 config get cpus: exit status 14 (35.106167ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 config get cpus: exit status 14 (42.414333ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.25s)
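
The test exercises the exit-code contract of the config subcommands: "config get" on an unset key fails with exit status 14, while "config set" and "config unset" succeed silently. A sketch:

	out/minikube-darwin-arm64 -p functional-099000 config unset cpus
	out/minikube-darwin-arm64 -p functional-099000 config get cpus
	echo $?    # 14, per the runs above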

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-099000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-099000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (169.414584ms)

-- stdout --
	* [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1008 10:44:47.932219    7536 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:44:47.932410    7536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:47.932415    7536 out.go:358] Setting ErrFile to fd 2...
	I1008 10:44:47.932418    7536 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:47.932597    7536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:44:47.933899    7536 out.go:352] Setting JSON to false
	I1008 10:44:47.953779    7536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4457,"bootTime":1728405030,"procs":577,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:44:47.953847    7536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:44:47.957929    7536 out.go:177] * [functional-099000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	I1008 10:44:47.965794    7536 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:44:47.965861    7536 notify.go:220] Checking for updates...
	I1008 10:44:47.972850    7536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:44:47.975851    7536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:44:47.978817    7536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:44:47.981820    7536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:44:47.984809    7536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:44:47.988199    7536 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:44:47.988479    7536 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:44:47.992836    7536 out.go:177] * Using the qemu2 driver based on existing profile
	I1008 10:44:47.999791    7536 start.go:297] selected driver: qemu2
	I1008 10:44:47.999799    7536 start.go:901] validating driver "qemu2" against &{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:44:47.999855    7536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:44:48.006822    7536 out.go:201] 
	W1008 10:44:48.010856    7536 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1008 10:44:48.014803    7536 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-099000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
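
The non-zero exit is the intended outcome here: with --memory 250MB the dry run is rejected up front (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) because 250MiB is below the 1800MB usable minimum, and the second, flag-free dry run then succeeds against the existing profile. The failing invocation, for reference:

	out/minikube-darwin-arm64 start -p functional-099000 --dry-run --memory 250MB --driver=qemu2
	echo $?    # 23 in this run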

TestFunctional/parallel/InternationalLanguage (0.12s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-099000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-099000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (116.445584ms)

-- stdout --
	* [functional-099000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1008 10:44:48.171560    7547 out.go:345] Setting OutFile to fd 1 ...
	I1008 10:44:48.171712    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.171715    7547 out.go:358] Setting ErrFile to fd 2...
	I1008 10:44:48.171717    7547 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1008 10:44:48.171861    7547 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19774-6384/.minikube/bin
	I1008 10:44:48.173424    7547 out.go:352] Setting JSON to false
	I1008 10:44:48.191828    7547 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-1.local","uptime":4458,"bootTime":1728405030,"procs":577,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.0.1","kernelVersion":"24.0.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"5b726c3a-f72c-561b-b03e-814251f12bfa"}
	W1008 10:44:48.191907    7547 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I1008 10:44:48.195926    7547 out.go:177] * [functional-099000] minikube v1.34.0 sur Darwin 15.0.1 (arm64)
	I1008 10:44:48.202800    7547 out.go:177]   - MINIKUBE_LOCATION=19774
	I1008 10:44:48.202855    7547 notify.go:220] Checking for updates...
	I1008 10:44:48.209850    7547 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	I1008 10:44:48.212775    7547 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I1008 10:44:48.215820    7547 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1008 10:44:48.218856    7547 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	I1008 10:44:48.221825    7547 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1008 10:44:48.225148    7547 config.go:182] Loaded profile config "functional-099000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I1008 10:44:48.225414    7547 driver.go:394] Setting default libvirt URI to qemu:///system
	I1008 10:44:48.229852    7547 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I1008 10:44:48.236810    7547 start.go:297] selected driver: qemu2
	I1008 10:44:48.236818    7547 start.go:901] validating driver "qemu2" against &{Name:functional-099000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19774/minikube-v1.34.0-1728382514-19774-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-099000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1008 10:44:48.236873    7547 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1008 10:44:48.243821    7547 out.go:201] 
	W1008 10:44:48.247839    7547 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1008 10:44:48.251802    7547 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.1s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.10s)

TestFunctional/parallel/License (1.43s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2288: (dbg) Done: out/minikube-darwin-arm64 license: (1.433284334s)
--- PASS: TestFunctional/parallel/License (1.43s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.643047417s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-099000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.66s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image rm kicbase/echo-server:functional-099000 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-099000
functional_test.go:424: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 image save --daemon kicbase/echo-server:functional-099000 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-099000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.08s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.11s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.11s)

TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1315: Took "51.852708ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1329: Took "37.877709ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.09s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1366: Took "52.004375ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1379: Took "37.765292ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.09s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.014428584s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.05s)
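
The query above goes through dscacheutil so that resolution exercises macOS directory services (the resolver real applications use) rather than Go's own lookup path. A short Go sketch of the same probe; the command and flags match the log, while the ip_address success check is an assumption about the tool's output format:

// Resolve a tunnel-exposed service name via macOS directory services,
// as the test does by shelling out to dscacheutil. The strings.Contains
// success check is an illustrative assumption, not the test's parsing.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	host := "nginx-svc.default.svc.cluster.local."
	out, err := exec.Command("dscacheutil", "-q", "host", "-a", "name", host).CombinedOutput()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	if strings.Contains(string(out), "ip_address") {
		fmt.Println("DNS resolution by dscacheutil for", host, "is working!")
	}
}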

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-099000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-099000
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-099000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-099000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (3.05s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-113000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-113000 --output=json --user=testUser: (3.052607042s)
--- PASS: TestJSONOutput/stop/Command (3.05s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-734000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-734000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (98.14525ms)

-- stdout --
	{"specversion":"1.0","id":"9d933998-c4ca-48f9-9b52-eb11646a245d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-734000] minikube v1.34.0 on Darwin 15.0.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b66518a9-a1e4-44da-81f7-b6bd1e08f8bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19774"}}
	{"specversion":"1.0","id":"5f2c154c-46f7-4d0c-92c6-9e67f58d89c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig"}}
	{"specversion":"1.0","id":"1fa94f4e-f812-471a-b75b-6faf77fe3113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"99cc03c5-6535-4852-b791-48efe8c7a031","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c55e139b-90fd-45e5-8e29-65f6c049cceb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube"}}
	{"specversion":"1.0","id":"78a701c4-ab7a-4767-a3e6-2fe870911d77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0f4729d8-3536-4e0c-bc26-c993094315e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-734000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-734000
--- PASS: TestErrorJSONOutput (0.21s)
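
Each --output=json line in the stdout above is a CloudEvents envelope whose data payload carries the step, info, or error details, and the JSON-output tests assert against exactly these fields. A minimal Go sketch that decodes the error event from the log; the cloudEvent struct is an illustration built from the fields shown, not minikube's own type:

// Decode one `minikube ... --output=json` line (a CloudEvents envelope).
// The struct mirrors the fields visible in the stdout above.
package main

import (
	"encoding/json"
	"fmt"
)

type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"0f4729d8-3536-4e0c-bc26-c993094315e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Data["name"], ev.Data["exitcode"]) // io.k8s.sigs.minikube.error DRV_UNSUPPORTED_OS 56
}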

TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.59s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-810000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.88s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-490000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (100.813541ms)

-- stdout --
	* [NoKubernetes-490000] minikube v1.34.0 on Darwin 15.0.1 (arm64)
	  - MINIKUBE_LOCATION=19774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19774-6384/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19774-6384/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
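
The MK_USAGE exit above is the expected outcome: --no-kubernetes and --kubernetes-version contradict each other, so minikube rejects the combination before doing any work. A sketch of the same mutual-exclusion guard with the standard flag package; the flag names mirror the CLI, but the validation itself is an illustration, not minikube's code:

// Reject mutually exclusive flags up front, as minikube does for
// --no-kubernetes with --kubernetes-version. Illustrative sketch only.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // the exit status the test asserts
	}
	fmt.Println("flags OK")
}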

TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-490000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-490000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (45.033041ms)

-- stdout --
	* The control-plane node NoKubernetes-490000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-490000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.11s)

TestNoKubernetes/serial/Stop (3.17s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-490000
I1008 11:05:52.057913    6907 install.go:79] stdout: 
W1008 11:05:52.058116    6907 out.go:174] [unset outFile]: * The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit 

I1008 11:05:52.058148    6907 install.go:99] testing: [sudo -n chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit]
I1008 11:05:52.075192    6907 install.go:106] running: [sudo chown root:wheel /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit]
I1008 11:05:52.088195    6907 install.go:99] testing: [sudo -n chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit]
I1008 11:05:52.099347    6907 install.go:106] running: [sudo chmod u+s /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/001/docker-machine-driver-hyperkit]
I1008 11:05:52.120508    6907 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1008 11:05:52.120633    6907 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/workspace/testdata/hyperkit-driver-older-version:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin:/opt/homebrew/bin
I1008 11:05:53.967956    6907 install.go:137] /Users/jenkins/workspace/testdata/hyperkit-driver-older-version/docker-machine-driver-hyperkit version is 1.2.0
W1008 11:05:53.967983    6907 install.go:62] docker-machine-driver-hyperkit: docker-machine-driver-hyperkit is version 1.2.0, want 1.11.0
W1008 11:05:53.968040    6907 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-hyperkit:
I1008 11:05:53.968071    6907 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-hyperkit-arm64.sha256 -> /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestHyperKitDriverInstallOrUpdate1074958771/002/docker-machine-driver-hyperkit
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-490000: (3.167145666s)
--- PASS: TestNoKubernetes/serial/Stop (3.17s)
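
The install.go lines interleaved above (they come from the concurrent TestHyperKitDriverInstallOrUpdate) show the driver-update flow: validate the binary on PATH, compare its reported version against the wanted one, and download a replacement if it is older. A compressed Go sketch of that decision; the hand-rolled version parsing is a stand-in for a real semver library, since a plain string compare would wrongly order "1.2.0" after "1.11.0":

// Driver-update decision as in the install.go lines: if the binary on
// PATH is older than wanted, fetch a newer one. The version parsing is
// a simplified stand-in for a real semver comparison.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse turns "1.2.0" into comparable integer fields.
func parse(v string) (out [3]int) {
	for i, p := range strings.SplitN(v, ".", 3) {
		out[i], _ = strconv.Atoi(p)
	}
	return
}

func olderThan(have, want string) bool {
	h, w := parse(have), parse(want)
	for i := range h {
		if h[i] != w[i] {
			return h[i] < w[i]
		}
	}
	return false
}

func main() {
	have, want := "1.2.0", "1.11.0" // versions from the log
	if olderThan(have, want) {
		fmt.Printf("docker-machine-driver-hyperkit is version %s, want %s: downloading\n", have, want)
	}
}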

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-490000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-490000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.847334ms)

-- stdout --
	* The control-plane node NoKubernetes-490000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-490000"

-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (3.56s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-919000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-919000 --alsologtostderr -v=3: (3.560154667s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (3.56s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-919000 -n old-k8s-version-919000: exit status 7 (64.242375ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-919000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.13s)
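
Note how the test reads the host state from the exit code, not just stdout: "status error: exit status 7 (may be ok)" means a non-zero status is tolerated when it merely signals a stopped host. A Go sketch of that pattern; treating exit status 7 as "stopped, may be ok" follows this test's tolerance and is not a complete map of minikube's status codes:

// Run `minikube status` and branch on the exit code, as the test does.
// Interpreting exit status 7 as a tolerable "stopped" state follows the
// test's own comment; this is not minikube's full exit-code contract.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-arm64", "status", "--format={{.Host}}", "-p", "old-k8s-version-919000")
	out, err := cmd.Output()
	if err == nil {
		fmt.Printf("host: %s\n", out)
		return
	}
	if cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 7 {
		fmt.Printf("status error: exit status 7 (may be ok); host: %s\n", out)
		return
	}
	fmt.Println("unexpected status failure:", err)
}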

TestStartStop/group/no-preload/serial/Stop (3.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-528000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-528000 --alsologtostderr -v=3: (3.116118958s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-528000 -n no-preload-528000: exit status 7 (62.0455ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-528000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (2.02s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-149000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-149000 --alsologtostderr -v=3: (2.022551334s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (2.02s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-149000 -n embed-certs-149000: exit status 7 (63.35475ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-149000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-186000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-186000 --alsologtostderr -v=3: (3.526570042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.53s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-197000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-197000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-197000 --alsologtostderr -v=3: (3.110604541s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-186000 -n default-k8s-diff-port-186000: exit status 7 (59.330958ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-186000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-197000 -n newest-cni-197000: exit status 7 (61.752917ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-197000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (22/258)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (12.17s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2089380194/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728409453731124000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2089380194/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728409453731124000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2089380194/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728409453731124000" to /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2089380194/001/test-1728409453731124000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (59.012708ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:13.790717    6907 retry.go:31] will retry after 294.293617ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.858916ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:14.174193    6907 retry.go:31] will retry after 791.966853ms: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.981417ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:15.057430    6907 retry.go:31] will retry after 1.457418968s: exit status 83
I1008 10:44:15.914888    6907 retry.go:31] will retry after 4.457901805s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.940791ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:16.609248    6907 retry.go:31] will retry after 2.287316587s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.485292ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:18.990512    6907 retry.go:31] will retry after 1.864254423s: exit status 83
I1008 10:44:20.375845    6907 retry.go:31] will retry after 8.702843783s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.669083ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:20.947849    6907 retry.go:31] will retry after 4.687741972s: exit status 83
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.064334ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo umount -f /mount-9p": exit status 83 (48.765333ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdany-port2089380194/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (12.17s)

TestFunctional/parallel/MountCmd/specific-port (11.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3295813299/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (66.489791ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:25.967197    6907 retry.go:31] will retry after 477.701376ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.2655ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:26.538491    6907 retry.go:31] will retry after 788.64403ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.394125ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:27.420861    6907 retry.go:31] will retry after 750.917082ms: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.321625ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:28.265472    6907 retry.go:31] will retry after 1.50776111s: exit status 83
I1008 10:44:29.081033    6907 retry.go:31] will retry after 9.608694681s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.746125ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:29.865411    6907 retry.go:31] will retry after 2.066409854s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.52575ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:32.023686    6907 retry.go:31] will retry after 4.872923426s: exit status 83
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (91.685ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "sudo umount -f /mount-9p": exit status 83 (48.837167ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-099000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdspecific-port3295813299/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (11.26s)

TestFunctional/parallel/MountCmd/VerifyCleanup (10.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2530382893/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2530382893/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2530382893/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (74.766667ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:37.238598    6907 retry.go:31] will retry after 682.501583ms: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (87.65ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:38.011122    6907 retry.go:31] will retry after 825.273509ms: exit status 83
I1008 10:44:38.692024    6907 retry.go:31] will retry after 21.457289854s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (91.96675ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:38.929628    6907 retry.go:31] will retry after 1.10019936s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (93.462458ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:40.124787    6907 retry.go:31] will retry after 1.590330587s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (92.369083ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:41.809897    6907 retry.go:31] will retry after 1.382801211s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (94.24375ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
I1008 10:44:43.289275    6907 retry.go:31] will retry after 4.077880169s: exit status 83
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-099000 ssh "findmnt -T" /mount1: exit status 83 (94.433042ms)

-- stdout --
	* The control-plane node functional-099000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-099000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2530382893/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2530382893/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-099000 /var/folders/2k/p0kjt1w95hl7b54xjlcc45ph0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2530382893/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (10.70s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.48s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-446000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-446000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-446000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/hosts:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/resolv.conf:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-446000

>>> host: crictl pods:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: crictl containers:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> k8s: describe netcat deployment:
error: context "cilium-446000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-446000" does not exist

>>> k8s: netcat logs:
error: context "cilium-446000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-446000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-446000" does not exist

>>> k8s: coredns logs:
error: context "cilium-446000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-446000" does not exist

>>> k8s: api server logs:
error: context "cilium-446000" does not exist

>>> host: /etc/cni:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: ip a s:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: ip r s:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: iptables-save:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: iptables table nat:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-446000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-446000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-446000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-446000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-446000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-446000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-446000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-446000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-446000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-446000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-446000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: kubelet daemon config:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> k8s: kubelet logs:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-446000

>>> host: docker daemon status:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: docker daemon config:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: docker system info:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: cri-docker daemon status:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: cri-docker daemon config:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: cri-dockerd version:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: containerd daemon status:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: containerd daemon config:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: containerd config dump:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: crio daemon status:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: crio daemon config:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: /etc/crio:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

>>> host: crio config:
* Profile "cilium-446000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-446000"

----------------------- debugLogs end: cilium-446000 [took: 2.367482792s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-446000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-446000
--- SKIP: TestNetworkPlugins/group/cilium (2.48s)

TestStartStop/group/disable-driver-mounts (0.12s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-124000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-124000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.12s)