Test Report: QEMU_macOS 17713

e2fcc470acdcfe456dc986c99fd39c61db68fa20:2024-03-20:33674

Failed tests (156/266)

Order  Failed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 41.19
7 TestDownloadOnly/v1.20.0/kubectl 0
31 TestOffline 10.04
36 TestAddons/Setup 10.29
37 TestCertOptions 10.22
38 TestCertExpiration 198.09
39 TestDockerFlags 12.58
40 TestForceSystemdFlag 10.11
41 TestForceSystemdEnv 10.22
47 TestErrorSpam/setup 9.79
56 TestFunctional/serial/StartWithProxy 9.92
58 TestFunctional/serial/SoftStart 5.27
59 TestFunctional/serial/KubeContext 0.06
60 TestFunctional/serial/KubectlGetPods 0.06
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.04
68 TestFunctional/serial/CacheCmd/cache/cache_reload 0.17
70 TestFunctional/serial/MinikubeKubectlCmd 0.69
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.93
72 TestFunctional/serial/ExtraConfig 5.27
73 TestFunctional/serial/ComponentHealth 0.06
74 TestFunctional/serial/LogsCmd 0.08
75 TestFunctional/serial/LogsFileCmd 0.08
76 TestFunctional/serial/InvalidService 0.03
79 TestFunctional/parallel/DashboardCmd 0.2
82 TestFunctional/parallel/StatusCmd 0.13
86 TestFunctional/parallel/ServiceCmdConnect 0.14
88 TestFunctional/parallel/PersistentVolumeClaim 0.03
90 TestFunctional/parallel/SSHCmd 0.12
91 TestFunctional/parallel/CpCmd 0.28
93 TestFunctional/parallel/FileSync 0.08
94 TestFunctional/parallel/CertSync 0.3
98 TestFunctional/parallel/NodeLabels 0.06
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.04
104 TestFunctional/parallel/Version/components 0.04
105 TestFunctional/parallel/ImageCommands/ImageListShort 0.04
106 TestFunctional/parallel/ImageCommands/ImageListTable 0.04
107 TestFunctional/parallel/ImageCommands/ImageListJson 0.04
108 TestFunctional/parallel/ImageCommands/ImageListYaml 0.04
109 TestFunctional/parallel/ImageCommands/ImageBuild 0.12
111 TestFunctional/parallel/DockerEnv/bash 0.05
112 TestFunctional/parallel/UpdateContextCmd/no_changes 0.05
113 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.04
114 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.04
115 TestFunctional/parallel/ServiceCmd/DeployApp 0.03
116 TestFunctional/parallel/ServiceCmd/List 0.05
117 TestFunctional/parallel/ServiceCmd/JSONOutput 0.04
118 TestFunctional/parallel/ServiceCmd/HTTPS 0.04
119 TestFunctional/parallel/ServiceCmd/Format 0.05
120 TestFunctional/parallel/ServiceCmd/URL 0.04
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.08
125 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 0
126 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 113.29
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.36
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.35
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.69
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.04
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.08
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 15.06
142 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 21.89
150 TestMultiControlPlane/serial/StartCluster 9.88
151 TestMultiControlPlane/serial/DeployApp 116.61
152 TestMultiControlPlane/serial/PingHostFromPods 0.09
153 TestMultiControlPlane/serial/AddWorkerNode 0.08
154 TestMultiControlPlane/serial/NodeLabels 0.06
155 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.11
156 TestMultiControlPlane/serial/CopyFile 0.07
157 TestMultiControlPlane/serial/StopSecondaryNode 0.12
158 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.11
159 TestMultiControlPlane/serial/RestartSecondaryNode 43.74
160 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.11
161 TestMultiControlPlane/serial/RestartClusterKeepsNodes 9.1
162 TestMultiControlPlane/serial/DeleteSecondaryNode 0.11
163 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.11
164 TestMultiControlPlane/serial/StopCluster 3.12
165 TestMultiControlPlane/serial/RestartCluster 5.27
166 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.11
167 TestMultiControlPlane/serial/AddSecondaryNode 0.08
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.11
171 TestImageBuild/serial/Setup 9.89
174 TestJSONOutput/start/Command 9.78
180 TestJSONOutput/pause/Command 0.08
186 TestJSONOutput/unpause/Command 0.05
203 TestMinikubeProfile 10.25
206 TestMountStart/serial/StartWithMountFirst 10.64
209 TestMultiNode/serial/FreshStart2Nodes 10.07
210 TestMultiNode/serial/DeployApp2Nodes 73.3
211 TestMultiNode/serial/PingHostFrom2Pods 0.09
212 TestMultiNode/serial/AddNode 0.08
213 TestMultiNode/serial/MultiNodeLabels 0.06
214 TestMultiNode/serial/ProfileList 0.11
215 TestMultiNode/serial/CopyFile 0.07
216 TestMultiNode/serial/StopNode 0.15
217 TestMultiNode/serial/StartAfterStop 48.03
218 TestMultiNode/serial/RestartKeepsNodes 8.35
219 TestMultiNode/serial/DeleteNode 0.11
220 TestMultiNode/serial/StopMultiNode 3.34
221 TestMultiNode/serial/RestartMultiNode 5.25
222 TestMultiNode/serial/ValidateNameConflict 20.63
226 TestPreload 10.13
228 TestScheduledStopUnix 10.04
229 TestSkaffold 17.41
232 TestRunningBinaryUpgrade 661.56
234 TestKubernetesUpgrade 18.2
248 TestStoppedBinaryUpgrade/Upgrade 616.23
258 TestPause/serial/Start 9.94
261 TestNoKubernetes/serial/StartWithK8s 10.15
262 TestNoKubernetes/serial/StartWithStopK8s 5.98
263 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 2.6
264 TestNoKubernetes/serial/Start 5.88
265 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 2.51
269 TestNoKubernetes/serial/StartNoArgs 5.96
271 TestNetworkPlugins/group/auto/Start 9.82
272 TestNetworkPlugins/group/kindnet/Start 9.82
273 TestNetworkPlugins/group/flannel/Start 9.93
274 TestNetworkPlugins/group/enable-default-cni/Start 9.87
275 TestNetworkPlugins/group/bridge/Start 9.81
276 TestNetworkPlugins/group/kubenet/Start 10.08
277 TestNetworkPlugins/group/custom-flannel/Start 9.97
278 TestNetworkPlugins/group/calico/Start 9.85
279 TestNetworkPlugins/group/false/Start 9.8
281 TestStartStop/group/old-k8s-version/serial/FirstStart 9.86
282 TestStartStop/group/old-k8s-version/serial/DeployApp 0.09
283 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.12
286 TestStartStop/group/old-k8s-version/serial/SecondStart 5.27
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.03
288 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.06
289 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.07
290 TestStartStop/group/old-k8s-version/serial/Pause 0.11
292 TestStartStop/group/no-preload/serial/FirstStart 9.98
293 TestStartStop/group/no-preload/serial/DeployApp 0.09
294 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.12
297 TestStartStop/group/no-preload/serial/SecondStart 5.26
298 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.03
299 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.06
300 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.07
301 TestStartStop/group/no-preload/serial/Pause 0.11
303 TestStartStop/group/embed-certs/serial/FirstStart 9.89
304 TestStartStop/group/embed-certs/serial/DeployApp 0.09
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.12
308 TestStartStop/group/embed-certs/serial/SecondStart 5.83
310 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 9.87
311 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.04
312 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.06
313 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.07
314 TestStartStop/group/embed-certs/serial/Pause 0.11
316 TestStartStop/group/newest-cni/serial/FirstStart 9.9
317 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.09
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.12
324 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 5.26
326 TestStartStop/group/newest-cni/serial/SecondStart 5.27
327 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.03
328 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.06
329 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.07
330 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.11
333 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.08
334 TestStartStop/group/newest-cni/serial/Pause 0.11
TestDownloadOnly/v1.20.0/json-events (41.19s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-293000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-293000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=qemu2 : exit status 40 (41.184628291s)

-- stdout --
	{"specversion":"1.0","id":"35b6c453-c8f5-45c0-ba6c-aae38317cdf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[download-only-293000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1890660f-aaa9-464d-a864-4beaa64ddcc4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17713"}}
	{"specversion":"1.0","id":"3877d0ac-ccfd-41df-ac75-02027fbf8b4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig"}}
	{"specversion":"1.0","id":"39244ea7-f8db-4d16-989f-b1549bdfc6d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1533c1bc-6194-45bb-b389-c6ded1868ab0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"82d5ae6d-b5dd-4240-a676-fa5e84eba54d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube"}}
	{"specversion":"1.0","id":"9fad52b8-989c-4c90-a3d5-d733f60bfed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.warning","datacontenttype":"application/json","data":{"message":"minikube skips various validations when --force is supplied; this may lead to unexpected behavior"}}
	{"specversion":"1.0","id":"ce87fe23-2e71-49d4-837c-4f45144413e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"c8daa3b6-c079-4db1-b5d5-abdbe1b93bdc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"6aa70a24-9212-4326-8818-610d642abb61","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Downloading VM boot image ...","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"80db6872-76b8-4467-888a-4d7f8a912b56","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"download-only-293000\" primary control-plane node in \"download-only-293000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6ed2e127-3b8d-475a-b6cd-f56972422cfd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Downloading Kubernetes v1.20.0 preload ...","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"83389a8c-2239-425a-893b-4a519eeae417","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"40","issues":"","message":"Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: \u0026{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520] Decompressors:map[bz2:0x1400091aea0 gz:0x1400091aea8 tar:0x1400091ae50 tar.bz2:0x1400091ae60 tar.gz:0x1400091ae70 tar.xz:0x1400091ae80 tar.zst:0x1400091ae90 tbz2:0x1400091ae60 tgz:0x1400091ae70 txz:0x1400091ae80 tzst:0x1400091ae90 xz:0x1400091aeb0 zip:0x1400091aec0 zst:0x1400091aeb8] Getters:map[file:0x1400210ab90 http:0x14000914500 https:0x140009145a0] Dir:false ProgressListener:\u003cnil\u003e Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404","name":"INET_CACHE_KUBECTL","url":""}}
	{"specversion":"1.0","id":"0409320f-cbe2-4e20-98ba-813f0e243b21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
** stderr ** 
	I0320 17:00:13.885054   12667 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:00:13.885208   12667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:00:13.885212   12667 out.go:304] Setting ErrFile to fd 2...
	I0320 17:00:13.885214   12667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:00:13.885342   12667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	W0320 17:00:13.885439   12667 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17713-12248/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17713-12248/.minikube/config/config.json: no such file or directory
	I0320 17:00:13.886688   12667 out.go:298] Setting JSON to true
	I0320 17:00:13.904604   12667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7183,"bootTime":1710972030,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:00:13.904665   12667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:00:13.910727   12667 out.go:97] [download-only-293000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:00:13.914624   12667 out.go:169] MINIKUBE_LOCATION=17713
	I0320 17:00:13.910862   12667 notify.go:220] Checking for updates...
	W0320 17:00:13.910898   12667 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball: no such file or directory
	I0320 17:00:13.922685   12667 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:00:13.930741   12667 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:00:13.933646   12667 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:00:13.936610   12667 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	W0320 17:00:13.942628   12667 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0320 17:00:13.942851   12667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:00:13.945637   12667 out.go:97] Using the qemu2 driver based on user configuration
	I0320 17:00:13.945659   12667 start.go:297] selected driver: qemu2
	I0320 17:00:13.945676   12667 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:00:13.945763   12667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:00:13.948616   12667 out.go:169] Automatically selected the socket_vmnet network
	I0320 17:00:13.953883   12667 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0320 17:00:13.953988   12667 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:00:13.954074   12667 cni.go:84] Creating CNI manager for ""
	I0320 17:00:13.954093   12667 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0320 17:00:13.954157   12667 start.go:340] cluster config:
	{Name:download-only-293000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:00:13.959395   12667 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:00:13.962692   12667 out.go:97] Downloading VM boot image ...
	I0320 17:00:13.962723   12667 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso
	I0320 17:00:32.982565   12667 out.go:97] Starting "download-only-293000" primary control-plane node in "download-only-293000" cluster
	I0320 17:00:32.982606   12667 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:00:33.273944   12667 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:00:33.273996   12667 cache.go:56] Caching tarball of preloaded images
	I0320 17:00:33.274788   12667 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:00:33.282876   12667 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0320 17:00:33.282930   12667 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:33.872628   12667 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:00:53.933714   12667 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:53.933897   12667 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:54.633819   12667 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0320 17:00:54.634022   12667 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-293000/config.json ...
	I0320 17:00:54.634042   12667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-293000/config.json: {Name:mk952b7b9982969111b4abe857bd1b66fc41d4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:00:54.634307   12667 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:00:54.634504   12667 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0320 17:00:54.987001   12667 out.go:169] 
	W0320 17:00:54.993212   12667 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520] Decompressors:map[bz2:0x1400091aea0 gz:0x1400091aea8 tar:0x1400091ae50 tar.bz2:0x1400091ae60 tar.gz:0x1400091ae70 tar.xz:0x1400091ae80 tar.zst:0x1400091ae90 tbz2:0x1400091ae60 tgz:0x1400091ae70 txz:0x1400091ae80 tzst:0x1400091ae90 xz:0x1400091aeb0 zip:0x1400091aec0 zst:0x1400091aeb8] Getters:map[file:0x1400210ab90 http:0x14000914500 https:0x140009145a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0320 17:00:54.993236   12667 out_reason.go:110] 
	W0320 17:00:55.001028   12667 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:00:55.004100   12667 out.go:169] 

** /stderr **
aaa_download_only_test.go:83: failed to download only. args: ["start" "-o=json" "--download-only" "-p" "download-only-293000" "--force" "--alsologtostderr" "--kubernetes-version=v1.20.0" "--container-runtime=docker" "--driver=qemu2" ""] exit status 40
--- FAIL: TestDownloadOnly/v1.20.0/json-events (41.19s)
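
Note on this failure: the fatal step is "Failed to cache kubectl", where the checksum URL https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 returns HTTP 404. Kubernetes v1.20.0 predates Go's darwin/arm64 support, so no kubectl binary (or checksum file) was ever published for that platform, and this download can never succeed. Below is a minimal sketch in Go, assuming nothing beyond the URL taken from the log above, that reproduces the 404 outside of minikube:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Checksum URL copied verbatim from the error message above. kubectl
	// was never published for darwin/arm64 at v1.20.0, so this should
	// report "404 Not Found", matching the test log.
	const url = "https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256"
	resp, err := http.Head(url)
	if err != nil {
		fmt.Println("request error:", err)
		return
	}
	resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}

The TestDownloadOnly/v1.20.0/kubectl failure in the next section is a direct consequence: the cached binary was never written, so its stat check cannot find the file.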

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:175: expected the file for binary exist at "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl" but got error stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl: no such file or directory
--- FAIL: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestOffline (10.04s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-arm64 start -p offline-docker-761000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 
aab_offline_test.go:55: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p offline-docker-761000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2 : exit status 80 (9.853814375s)

-- stdout --
	* [offline-docker-761000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "offline-docker-761000" primary control-plane node in "offline-docker-761000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "offline-docker-761000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:12:53.869691   14283 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:12:53.869832   14283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:12:53.869838   14283 out.go:304] Setting ErrFile to fd 2...
	I0320 17:12:53.869841   14283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:12:53.869964   14283 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:12:53.871214   14283 out.go:298] Setting JSON to false
	I0320 17:12:53.889329   14283 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7943,"bootTime":1710972030,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:12:53.889429   14283 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:12:53.894565   14283 out.go:177] * [offline-docker-761000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:12:53.898635   14283 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:12:53.898699   14283 notify.go:220] Checking for updates...
	I0320 17:12:53.905523   14283 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:12:53.906742   14283 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:12:53.909502   14283 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:12:53.912574   14283 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:12:53.915608   14283 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:12:53.918901   14283 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:12:53.918957   14283 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:12:53.922505   14283 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:12:53.929495   14283 start.go:297] selected driver: qemu2
	I0320 17:12:53.929512   14283 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:12:53.929519   14283 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:12:53.931468   14283 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:12:53.934478   14283 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:12:53.938619   14283 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:12:53.938657   14283 cni.go:84] Creating CNI manager for ""
	I0320 17:12:53.938663   14283 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:12:53.938667   14283 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:12:53.938696   14283 start.go:340] cluster config:
	{Name:offline-docker-761000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:12:53.943400   14283 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:53.951530   14283 out.go:177] * Starting "offline-docker-761000" primary control-plane node in "offline-docker-761000" cluster
	I0320 17:12:53.955517   14283 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:12:53.955551   14283 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:12:53.955558   14283 cache.go:56] Caching tarball of preloaded images
	I0320 17:12:53.955632   14283 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:12:53.955638   14283 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:12:53.955696   14283 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/offline-docker-761000/config.json ...
	I0320 17:12:53.955706   14283 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/offline-docker-761000/config.json: {Name:mk7442747c50e29e7c957507f68a315200492650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:12:53.956016   14283 start.go:360] acquireMachinesLock for offline-docker-761000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:12:53.956050   14283 start.go:364] duration metric: took 23.5µs to acquireMachinesLock for "offline-docker-761000"
	I0320 17:12:53.956062   14283 start.go:93] Provisioning new machine with config: &{Name:offline-docker-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:12:53.956114   14283 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:12:53.964539   14283 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:12:53.979909   14283 start.go:159] libmachine.API.Create for "offline-docker-761000" (driver="qemu2")
	I0320 17:12:53.979944   14283 client.go:168] LocalClient.Create starting
	I0320 17:12:53.980030   14283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:12:53.980061   14283 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:53.980070   14283 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:53.980120   14283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:12:53.980150   14283 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:53.980164   14283 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:53.980534   14283 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:12:54.126059   14283 main.go:141] libmachine: Creating SSH key...
	I0320 17:12:54.177462   14283 main.go:141] libmachine: Creating Disk image...
	I0320 17:12:54.177472   14283 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:12:54.177694   14283 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2
	I0320 17:12:54.191559   14283 main.go:141] libmachine: STDOUT: 
	I0320 17:12:54.191585   14283 main.go:141] libmachine: STDERR: 
	I0320 17:12:54.191679   14283 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2 +20000M
	I0320 17:12:54.208066   14283 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:12:54.208094   14283 main.go:141] libmachine: STDERR: 
	I0320 17:12:54.208114   14283 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2
	I0320 17:12:54.208120   14283 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:12:54.208153   14283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2a:b7:ea:88:d8:8f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2
	I0320 17:12:54.209785   14283 main.go:141] libmachine: STDOUT: 
	I0320 17:12:54.209801   14283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:12:54.209824   14283 client.go:171] duration metric: took 229.884584ms to LocalClient.Create
	I0320 17:12:56.211420   14283 start.go:128] duration metric: took 2.255387667s to createHost
	I0320 17:12:56.211437   14283 start.go:83] releasing machines lock for "offline-docker-761000", held for 2.255470667s
	W0320 17:12:56.211455   14283 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:56.219398   14283 out.go:177] * Deleting "offline-docker-761000" in qemu2 ...
	W0320 17:12:56.227378   14283 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:56.227392   14283 start.go:728] Will try again in 5 seconds ...
	I0320 17:13:01.229375   14283 start.go:360] acquireMachinesLock for offline-docker-761000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:13:01.229831   14283 start.go:364] duration metric: took 360.166µs to acquireMachinesLock for "offline-docker-761000"
	I0320 17:13:01.229982   14283 start.go:93] Provisioning new machine with config: &{Name:offline-docker-761000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:offline-docker-761000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:13:01.230228   14283 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:13:01.240835   14283 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:13:01.290538   14283 start.go:159] libmachine.API.Create for "offline-docker-761000" (driver="qemu2")
	I0320 17:13:01.290588   14283 client.go:168] LocalClient.Create starting
	I0320 17:13:01.290715   14283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:13:01.290776   14283 main.go:141] libmachine: Decoding PEM data...
	I0320 17:13:01.290790   14283 main.go:141] libmachine: Parsing certificate...
	I0320 17:13:01.290855   14283 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:13:01.290902   14283 main.go:141] libmachine: Decoding PEM data...
	I0320 17:13:01.290913   14283 main.go:141] libmachine: Parsing certificate...
	I0320 17:13:01.291613   14283 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:13:01.453649   14283 main.go:141] libmachine: Creating SSH key...
	I0320 17:13:01.615143   14283 main.go:141] libmachine: Creating Disk image...
	I0320 17:13:01.615153   14283 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:13:01.615352   14283 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2
	I0320 17:13:01.627666   14283 main.go:141] libmachine: STDOUT: 
	I0320 17:13:01.627693   14283 main.go:141] libmachine: STDERR: 
	I0320 17:13:01.627763   14283 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2 +20000M
	I0320 17:13:01.638461   14283 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:13:01.638478   14283 main.go:141] libmachine: STDERR: 
	I0320 17:13:01.638494   14283 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2
	I0320 17:13:01.638500   14283 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:13:01.638537   14283 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:5c:c2:a9:7b:a1 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/offline-docker-761000/disk.qcow2
	I0320 17:13:01.640172   14283 main.go:141] libmachine: STDOUT: 
	I0320 17:13:01.640186   14283 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:13:01.640201   14283 client.go:171] duration metric: took 349.62075ms to LocalClient.Create
	I0320 17:13:03.642296   14283 start.go:128] duration metric: took 2.412129625s to createHost
	I0320 17:13:03.642356   14283 start.go:83] releasing machines lock for "offline-docker-761000", held for 2.412595167s
	W0320 17:13:03.642782   14283 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p offline-docker-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p offline-docker-761000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:13:03.653514   14283 out.go:177] 
	W0320 17:13:03.661592   14283 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:13:03.661658   14283 out.go:239] * 
	* 
	W0320 17:13:03.664266   14283 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:13:03.675317   14283 out.go:177] 

** /stderr **
aab_offline_test.go:58: out/minikube-darwin-arm64 start -p offline-docker-761000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=qemu2  failed: exit status 80
panic.go:626: *** TestOffline FAILED at 2024-03-20 17:13:03.692489 -0700 PDT m=+769.917908876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-761000 -n offline-docker-761000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p offline-docker-761000 -n offline-docker-761000: exit status 7 (68.099541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "offline-docker-761000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "offline-docker-761000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p offline-docker-761000
--- FAIL: TestOffline (10.04s)
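
Note on this failure (and on the bulk of the qemu2 failures in this run): every VM create attempt dies at the same step, when libmachine launches /opt/socket_vmnet/bin/socket_vmnet_client against /var/run/socket_vmnet and gets "Connection refused", meaning the socket_vmnet daemon was not listening on the CI host. Below is a minimal sketch in Go, assuming only the socket path taken from the log above, that checks the daemon independently of minikube:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Unix socket path copied from the log above. If the socket_vmnet
	// daemon is down (or the socket file is stale), Dial fails with
	// "connection refused", matching the ERROR lines in this report.
	const sock = "/var/run/socket_vmnet"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("socket_vmnet unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet accepting connections at", sock)
}

The identical "Failed to connect to /var/run/socket_vmnet" error recurs verbatim in the TestAddons/Setup section below.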

TestAddons/Setup (10.29s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-arm64 start -p addons-367000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns
addons_test.go:109: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p addons-367000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns: exit status 80 (10.286071958s)

-- stdout --
	* [addons-367000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "addons-367000" primary control-plane node in "addons-367000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "addons-367000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:01:48.474853   12843 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:01:48.474983   12843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:01:48.474987   12843 out.go:304] Setting ErrFile to fd 2...
	I0320 17:01:48.474989   12843 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:01:48.475130   12843 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:01:48.476237   12843 out.go:298] Setting JSON to false
	I0320 17:01:48.492635   12843 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7278,"bootTime":1710972030,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:01:48.492693   12843 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:01:48.498002   12843 out.go:177] * [addons-367000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:01:48.505053   12843 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:01:48.505081   12843 notify.go:220] Checking for updates...
	I0320 17:01:48.508005   12843 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:01:48.511999   12843 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:01:48.515896   12843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:01:48.518960   12843 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:01:48.522024   12843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:01:48.525178   12843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:01:48.528996   12843 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:01:48.535983   12843 start.go:297] selected driver: qemu2
	I0320 17:01:48.535988   12843 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:01:48.535994   12843 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:01:48.538250   12843 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:01:48.542085   12843 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:01:48.545057   12843 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:01:48.545099   12843 cni.go:84] Creating CNI manager for ""
	I0320 17:01:48.545106   12843 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:01:48.545110   12843 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:01:48.545155   12843 start.go:340] cluster config:
	{Name:addons-367000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:01:48.549643   12843 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:01:48.558017   12843 out.go:177] * Starting "addons-367000" primary control-plane node in "addons-367000" cluster
	I0320 17:01:48.560899   12843 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:01:48.560914   12843 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:01:48.560922   12843 cache.go:56] Caching tarball of preloaded images
	I0320 17:01:48.560977   12843 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:01:48.560983   12843 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:01:48.561218   12843 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/addons-367000/config.json ...
	I0320 17:01:48.561229   12843 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/addons-367000/config.json: {Name:mk10d982fffa75e282fdb05c523706478b5c1b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:01:48.561628   12843 start.go:360] acquireMachinesLock for addons-367000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:01:48.561784   12843 start.go:364] duration metric: took 148.416µs to acquireMachinesLock for "addons-367000"
	I0320 17:01:48.561797   12843 start.go:93] Provisioning new machine with config: &{Name:addons-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:01:48.561833   12843 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:01:48.569965   12843 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0320 17:01:48.587378   12843 start.go:159] libmachine.API.Create for "addons-367000" (driver="qemu2")
	I0320 17:01:48.587407   12843 client.go:168] LocalClient.Create starting
	I0320 17:01:48.587536   12843 main.go:141] libmachine: Creating CA: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:01:48.711536   12843 main.go:141] libmachine: Creating client certificate: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:01:48.819293   12843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:01:49.065144   12843 main.go:141] libmachine: Creating SSH key...
	I0320 17:01:49.152085   12843 main.go:141] libmachine: Creating Disk image...
	I0320 17:01:49.152091   12843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:01:49.152280   12843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2
	I0320 17:01:49.164869   12843 main.go:141] libmachine: STDOUT: 
	I0320 17:01:49.164893   12843 main.go:141] libmachine: STDERR: 
	I0320 17:01:49.164958   12843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2 +20000M
	I0320 17:01:49.175666   12843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:01:49.175685   12843 main.go:141] libmachine: STDERR: 
	I0320 17:01:49.175699   12843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2
	I0320 17:01:49.175703   12843 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:01:49.175729   12843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:88:7e:dc:dc:23 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2
	I0320 17:01:49.177501   12843 main.go:141] libmachine: STDOUT: 
	I0320 17:01:49.177525   12843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:01:49.177544   12843 client.go:171] duration metric: took 590.155208ms to LocalClient.Create
	I0320 17:01:51.179743   12843 start.go:128] duration metric: took 2.617999208s to createHost
	I0320 17:01:51.179879   12843 start.go:83] releasing machines lock for "addons-367000", held for 2.618190708s
	W0320 17:01:51.179956   12843 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:01:51.187188   12843 out.go:177] * Deleting "addons-367000" in qemu2 ...
	W0320 17:01:51.212599   12843 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:01:51.212633   12843 start.go:728] Will try again in 5 seconds ...
	I0320 17:01:56.213867   12843 start.go:360] acquireMachinesLock for addons-367000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:01:56.214245   12843 start.go:364] duration metric: took 306.833µs to acquireMachinesLock for "addons-367000"
	I0320 17:01:56.214357   12843 start.go:93] Provisioning new machine with config: &{Name:addons-367000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:addons-367000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:01:56.214616   12843 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:01:56.219299   12843 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0320 17:01:56.267919   12843 start.go:159] libmachine.API.Create for "addons-367000" (driver="qemu2")
	I0320 17:01:56.267992   12843 client.go:168] LocalClient.Create starting
	I0320 17:01:56.268101   12843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:01:56.268150   12843 main.go:141] libmachine: Decoding PEM data...
	I0320 17:01:56.268168   12843 main.go:141] libmachine: Parsing certificate...
	I0320 17:01:56.268265   12843 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:01:56.268307   12843 main.go:141] libmachine: Decoding PEM data...
	I0320 17:01:56.268322   12843 main.go:141] libmachine: Parsing certificate...
	I0320 17:01:56.268823   12843 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:01:56.425257   12843 main.go:141] libmachine: Creating SSH key...
	I0320 17:01:56.657123   12843 main.go:141] libmachine: Creating Disk image...
	I0320 17:01:56.657130   12843 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:01:56.657383   12843 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2
	I0320 17:01:56.670637   12843 main.go:141] libmachine: STDOUT: 
	I0320 17:01:56.670657   12843 main.go:141] libmachine: STDERR: 
	I0320 17:01:56.670716   12843 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2 +20000M
	I0320 17:01:56.681623   12843 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:01:56.681644   12843 main.go:141] libmachine: STDERR: 
	I0320 17:01:56.681660   12843 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2
	I0320 17:01:56.681664   12843 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:01:56.681698   12843 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:3c:28:ea:b7:ee -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/addons-367000/disk.qcow2
	I0320 17:01:56.683439   12843 main.go:141] libmachine: STDOUT: 
	I0320 17:01:56.683454   12843 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:01:56.683466   12843 client.go:171] duration metric: took 415.485125ms to LocalClient.Create
	I0320 17:01:58.685549   12843 start.go:128] duration metric: took 2.4710015s to createHost
	I0320 17:01:58.685674   12843 start.go:83] releasing machines lock for "addons-367000", held for 2.471466958s
	W0320 17:01:58.686086   12843 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p addons-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p addons-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:01:58.695381   12843 out.go:177] 
	W0320 17:01:58.702704   12843 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:01:58.702745   12843 out.go:239] * 
	* 
	W0320 17:01:58.705451   12843 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:01:58.715556   12843 out.go:177] 

** /stderr **
addons_test.go:111: out/minikube-darwin-arm64 start -p addons-367000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=qemu2  --addons=ingress --addons=ingress-dns failed: exit status 80
--- FAIL: TestAddons/Setup (10.29s)
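
The stderr trace above pinpoints the abort: libmachine hands the qemu-system-aarch64 command line to socket_vmnet_client, and it is the client's connect to /var/run/socket_vmnet that is refused, on both the first attempt and the retry five seconds later. The failure can be reproduced without minikube by giving the client a trivial command; a sketch, assuming the 'socket_vmnet_client <socket> <command ...>' usage implied by the logged invocation:

	# Sketch: reproduce the refused connection in isolation.
	# ('true' is an arbitrary placeholder command, not part of the test.)
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true
	# While the daemon is down this prints:
	#   Failed to connect to "/var/run/socket_vmnet": Connection refused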

TestCertOptions (10.22s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-options-854000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-options-854000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 : exit status 80 (9.855153208s)

-- stdout --
	* [cert-options-854000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-options-854000" primary control-plane node in "cert-options-854000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-options-854000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-options-854000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-options-854000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=qemu2 " : exit status 80
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-arm64 -p cert-options-854000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p cert-options-854000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 83 (83.6915ms)

-- stdout --
	* The control-plane node cert-options-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-854000"

-- /stdout --
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-arm64 -p cert-options-854000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 83
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-854000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters: null\n\tcontexts: null\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers: null\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-arm64 ssh -p cert-options-854000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p cert-options-854000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 83 (41.281667ms)

-- stdout --
	* The control-plane node cert-options-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-854000"

-- /stdout --
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-arm64 ssh -p cert-options-854000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 83
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	* The control-plane node cert-options-854000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p cert-options-854000"

-- /stdout --
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-03-20 17:25:14.077983 -0700 PDT m=+1500.331885126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-854000 -n cert-options-854000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-options-854000 -n cert-options-854000: exit status 7 (31.493084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-options-854000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-options-854000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-options-854000
--- FAIL: TestCertOptions (10.22s)
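
The SAN assertions at cert_options_test.go:69 are a downstream symptom: with the host stopped, the ssh step that dumps the apiserver certificate exits with status 83, so the test compares its expected names against empty output. On a run where the VM boots, the same check can be made by hand; a sketch using the profile and command from this test:

	# Sketch: inspect the apiserver certificate SANs on a healthy cluster.
	out/minikube-darwin-arm64 -p cert-options-854000 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"
	# Expected to list 127.0.0.1, 192.168.15.15, localhost and www.google.com.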

TestCertExpiration (198.09s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=3m --driver=qemu2 
cert_options_test.go:123: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=3m --driver=qemu2 : exit status 80 (12.658640333s)

-- stdout --
	* [cert-expiration-498000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "cert-expiration-498000" primary control-plane node in "cert-expiration-498000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "cert-expiration-498000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-498000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:125: failed to start minikube with args: "out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=3m --driver=qemu2 " : exit status 80
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=8760h --driver=qemu2 
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=8760h --driver=qemu2 : exit status 80 (5.251090459s)

-- stdout --
	* [cert-expiration-498000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-498000" primary control-plane node in "cert-expiration-498000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=8760h --driver=qemu2 " : exit status 80
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-498000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "cert-expiration-498000" primary control-plane node in "cert-expiration-498000" cluster
	* Restarting existing qemu2 VM for "cert-expiration-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "cert-expiration-498000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p cert-expiration-498000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-03-20 17:27:59.106105 -0700 PDT m=+1665.366443126
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-498000 -n cert-expiration-498000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p cert-expiration-498000 -n cert-expiration-498000: exit status 7 (68.518917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "cert-expiration-498000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-498000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cert-expiration-498000
--- FAIL: TestCertExpiration (198.09s)
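
Note that the 198s duration is not time spent failing: the two start attempts exit after roughly 13s and 5s, and the remainder is the test waiting out the 3-minute window set by --cert-expiration=3m before the second start. The flow the test drives, as a sketch (commands taken verbatim from the log; the sleep stands in for the test's internal wait):

	# Sketch: the flow TestCertExpiration exercises (both starts failed in this run).
	out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=3m --driver=qemu2
	sleep 180   # let the short-lived certificates expire
	out/minikube-darwin-arm64 start -p cert-expiration-498000 --memory=2048 --cert-expiration=8760h --driver=qemu2
	# The second start is expected to warn about the expired certs and regenerate them.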

TestDockerFlags (12.58s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-arm64 start -p docker-flags-580000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:51: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p docker-flags-580000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (12.313973291s)

-- stdout --
	* [docker-flags-580000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "docker-flags-580000" primary control-plane node in "docker-flags-580000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "docker-flags-580000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:24:51.452930   14987 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:24:51.453079   14987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:24:51.453085   14987 out.go:304] Setting ErrFile to fd 2...
	I0320 17:24:51.453088   14987 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:24:51.453231   14987 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:24:51.454644   14987 out.go:298] Setting JSON to false
	I0320 17:24:51.474356   14987 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8661,"bootTime":1710972030,"procs":514,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:24:51.474464   14987 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:24:51.538643   14987 out.go:177] * [docker-flags-580000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:24:51.545595   14987 notify.go:220] Checking for updates...
	I0320 17:24:51.549546   14987 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:24:51.559554   14987 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:24:51.567641   14987 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:24:51.584559   14987 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:24:51.591562   14987 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:24:51.597494   14987 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:24:51.601242   14987 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:24:51.601360   14987 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:24:51.601446   14987 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:24:51.605607   14987 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:24:51.613572   14987 start.go:297] selected driver: qemu2
	I0320 17:24:51.613583   14987 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:24:51.613595   14987 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:24:51.617895   14987 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:24:51.627699   14987 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:24:51.630817   14987 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0320 17:24:51.630966   14987 cni.go:84] Creating CNI manager for ""
	I0320 17:24:51.631015   14987 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:24:51.631033   14987 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:24:51.631163   14987 start.go:340] cluster config:
	{Name:docker-flags-580000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:24:51.639825   14987 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:24:51.657589   14987 out.go:177] * Starting "docker-flags-580000" primary control-plane node in "docker-flags-580000" cluster
	I0320 17:24:51.662583   14987 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:24:51.662626   14987 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:24:51.662634   14987 cache.go:56] Caching tarball of preloaded images
	I0320 17:24:51.662751   14987 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:24:51.662772   14987 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:24:51.662878   14987 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/docker-flags-580000/config.json ...
	I0320 17:24:51.662896   14987 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/docker-flags-580000/config.json: {Name:mk31d655d84a80e466f8c24ed47aa82442ab0efb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:24:51.663469   14987 start.go:360] acquireMachinesLock for docker-flags-580000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:24:53.815019   14987 start.go:364] duration metric: took 2.151554625s to acquireMachinesLock for "docker-flags-580000"
	I0320 17:24:53.815166   14987 start.go:93] Provisioning new machine with config: &{Name:docker-flags-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:24:53.815376   14987 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:24:53.826963   14987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:24:53.875976   14987 start.go:159] libmachine.API.Create for "docker-flags-580000" (driver="qemu2")
	I0320 17:24:53.876021   14987 client.go:168] LocalClient.Create starting
	I0320 17:24:53.876148   14987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:24:53.876189   14987 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:53.876209   14987 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:53.876285   14987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:24:53.876312   14987 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:53.876324   14987 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:53.876995   14987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:24:54.057957   14987 main.go:141] libmachine: Creating SSH key...
	I0320 17:24:54.150070   14987 main.go:141] libmachine: Creating Disk image...
	I0320 17:24:54.150079   14987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:24:54.150297   14987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2
	I0320 17:24:54.162419   14987 main.go:141] libmachine: STDOUT: 
	I0320 17:24:54.162438   14987 main.go:141] libmachine: STDERR: 
	I0320 17:24:54.162491   14987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2 +20000M
	I0320 17:24:54.173175   14987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:24:54.173191   14987 main.go:141] libmachine: STDERR: 
	I0320 17:24:54.173210   14987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2
	I0320 17:24:54.173214   14987 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:24:54.173244   14987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=4a:17:ee:a4:76:f5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2
	I0320 17:24:54.174932   14987 main.go:141] libmachine: STDOUT: 
	I0320 17:24:54.174949   14987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:24:54.174968   14987 client.go:171] duration metric: took 298.951833ms to LocalClient.Create
	I0320 17:24:56.177076   14987 start.go:128] duration metric: took 2.36176125s to createHost
	I0320 17:24:56.177240   14987 start.go:83] releasing machines lock for "docker-flags-580000", held for 2.36218125s
	W0320 17:24:56.177315   14987 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:56.190565   14987 out.go:177] * Deleting "docker-flags-580000" in qemu2 ...
	W0320 17:24:56.217369   14987 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:56.217404   14987 start.go:728] Will try again in 5 seconds ...
	I0320 17:25:01.219421   14987 start.go:360] acquireMachinesLock for docker-flags-580000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:01.219895   14987 start.go:364] duration metric: took 376.75µs to acquireMachinesLock for "docker-flags-580000"
	I0320 17:25:01.220027   14987 start.go:93] Provisioning new machine with config: &{Name:docker-flags-580000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:docker-flags-580000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:01.220382   14987 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:01.230943   14987 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:25:01.279892   14987 start.go:159] libmachine.API.Create for "docker-flags-580000" (driver="qemu2")
	I0320 17:25:01.279944   14987 client.go:168] LocalClient.Create starting
	I0320 17:25:01.280061   14987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:01.280124   14987 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:01.280144   14987 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:01.280213   14987 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:01.280254   14987 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:01.280269   14987 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:01.280821   14987 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:01.436079   14987 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:01.659672   14987 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:01.659681   14987 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:01.659945   14987 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2
	I0320 17:25:01.673122   14987 main.go:141] libmachine: STDOUT: 
	I0320 17:25:01.673142   14987 main.go:141] libmachine: STDERR: 
	I0320 17:25:01.673189   14987 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2 +20000M
	I0320 17:25:01.683933   14987 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:01.683952   14987 main.go:141] libmachine: STDERR: 
	I0320 17:25:01.683966   14987 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2
	I0320 17:25:01.683970   14987 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:01.684020   14987 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/qemu.pid -device virtio-net-pci,netdev=net0,mac=82:e7:fc:d2:7a:d0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/docker-flags-580000/disk.qcow2
	I0320 17:25:01.685773   14987 main.go:141] libmachine: STDOUT: 
	I0320 17:25:01.685787   14987 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:01.685799   14987 client.go:171] duration metric: took 405.865916ms to LocalClient.Create
	I0320 17:25:03.687900   14987 start.go:128] duration metric: took 2.467572417s to createHost
	I0320 17:25:03.687964   14987 start.go:83] releasing machines lock for "docker-flags-580000", held for 2.468140417s
	W0320 17:25:03.688338   14987 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p docker-flags-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p docker-flags-580000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:03.699935   14987 out.go:177] 
	W0320 17:25:03.705938   14987 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:25:03.705960   14987 out.go:239] * 
	* 
	W0320 17:25:03.708763   14987 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:25:03.718869   14987 out.go:177] 

** /stderr **
docker_test.go:53: failed to start minikube with args: "out/minikube-darwin-arm64 start -p docker-flags-580000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:56: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-580000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:56: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-580000 ssh "sudo systemctl show docker --property=Environment --no-pager": exit status 83 (82.67625ms)

-- stdout --
	* The control-plane node docker-flags-580000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-580000"

-- /stdout --
docker_test.go:58: failed to 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-580000 ssh \"sudo systemctl show docker --property=Environment --no-pager\"": exit status 83
docker_test.go:63: expected env key/value "FOO=BAR" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-580000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-580000\"\n"*.
docker_test.go:63: expected env key/value "BAZ=BAT" to be passed to minikube's docker and be included in: *"* The control-plane node docker-flags-580000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-580000\"\n"*.
docker_test.go:67: (dbg) Run:  out/minikube-darwin-arm64 -p docker-flags-580000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p docker-flags-580000 ssh "sudo systemctl show docker --property=ExecStart --no-pager": exit status 83 (45.822958ms)

-- stdout --
	* The control-plane node docker-flags-580000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p docker-flags-580000"

-- /stdout --
docker_test.go:69: failed on the second 'systemctl show docker' inside minikube. args "out/minikube-darwin-arm64 -p docker-flags-580000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"": exit status 83
docker_test.go:73: expected "out/minikube-darwin-arm64 -p docker-flags-580000 ssh \"sudo systemctl show docker --property=ExecStart --no-pager\"" output to have include *--debug* . output: "* The control-plane node docker-flags-580000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p docker-flags-580000\"\n"
panic.go:626: *** TestDockerFlags FAILED at 2024-03-20 17:25:03.859846 -0700 PDT m=+1490.113349835
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-580000 -n docker-flags-580000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p docker-flags-580000 -n docker-flags-580000: exit status 7 (31.419958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "docker-flags-580000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "docker-flags-580000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p docker-flags-580000
--- FAIL: TestDockerFlags (12.58s)
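
Note on the root cause: every start attempt in this test dies at the same step. socket_vmnet_client cannot reach the socket_vmnet daemon ("Failed to connect to \"/var/run/socket_vmnet\": Connection refused"), so the qemu2 VM is never created and each follow-up ssh/status command sees state=Stopped. A minimal triage sketch for the CI host follows; it assumes socket_vmnet was installed under the /opt/socket_vmnet prefix shown in the logs and runs as a launchd daemon (the launchd service label is an assumption based on the upstream socket_vmnet install instructions, not something taken from this log):

	# Is the daemon socket present, and is a socket_vmnet process running?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If not, (re)start the daemon via launchd. The label
	# io.github.lima-vm.socket_vmnet is an assumption; adjust it to match
	# the plist actually installed on this host.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet

The same connection-refused signature recurs in the remaining failures below, which is consistent with a host-level daemon outage rather than a per-test problem.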

TestForceSystemdFlag (10.11s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-flag-028000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:91: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-flag-028000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.8949005s)

-- stdout --
	* [force-systemd-flag-028000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-flag-028000" primary control-plane node in "force-systemd-flag-028000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-flag-028000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:24:16.117296   14824 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:24:16.117424   14824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:24:16.117427   14824 out.go:304] Setting ErrFile to fd 2...
	I0320 17:24:16.117429   14824 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:24:16.117559   14824 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:24:16.118645   14824 out.go:298] Setting JSON to false
	I0320 17:24:16.134913   14824 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8626,"bootTime":1710972030,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:24:16.134973   14824 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:24:16.139875   14824 out.go:177] * [force-systemd-flag-028000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:24:16.146829   14824 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:24:16.150776   14824 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:24:16.146895   14824 notify.go:220] Checking for updates...
	I0320 17:24:16.153724   14824 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:24:16.156799   14824 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:24:16.159812   14824 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:24:16.162771   14824 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:24:16.166103   14824 config.go:182] Loaded profile config "NoKubernetes-342000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:24:16.166169   14824 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:24:16.166217   14824 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:24:16.170807   14824 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:24:16.177777   14824 start.go:297] selected driver: qemu2
	I0320 17:24:16.177783   14824 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:24:16.177788   14824 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:24:16.180063   14824 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:24:16.183845   14824 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:24:16.186806   14824 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:24:16.186855   14824 cni.go:84] Creating CNI manager for ""
	I0320 17:24:16.186863   14824 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:24:16.186868   14824 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:24:16.186902   14824 start.go:340] cluster config:
	{Name:force-systemd-flag-028000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:24:16.191373   14824 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:24:16.198782   14824 out.go:177] * Starting "force-systemd-flag-028000" primary control-plane node in "force-systemd-flag-028000" cluster
	I0320 17:24:16.202803   14824 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:24:16.202825   14824 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:24:16.202836   14824 cache.go:56] Caching tarball of preloaded images
	I0320 17:24:16.202905   14824 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:24:16.202911   14824 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:24:16.203005   14824 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/force-systemd-flag-028000/config.json ...
	I0320 17:24:16.203017   14824 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/force-systemd-flag-028000/config.json: {Name:mkda678a47dc124b797b159c68aedf45c2cc0034 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:24:16.203245   14824 start.go:360] acquireMachinesLock for force-systemd-flag-028000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:24:16.203280   14824 start.go:364] duration metric: took 27.416µs to acquireMachinesLock for "force-systemd-flag-028000"
	I0320 17:24:16.203294   14824 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:24:16.203325   14824 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:24:16.207589   14824 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:24:16.225085   14824 start.go:159] libmachine.API.Create for "force-systemd-flag-028000" (driver="qemu2")
	I0320 17:24:16.225114   14824 client.go:168] LocalClient.Create starting
	I0320 17:24:16.225174   14824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:24:16.225228   14824 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:16.225237   14824 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:16.225269   14824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:24:16.225291   14824 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:16.225297   14824 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:16.225712   14824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:24:16.373081   14824 main.go:141] libmachine: Creating SSH key...
	I0320 17:24:16.443677   14824 main.go:141] libmachine: Creating Disk image...
	I0320 17:24:16.443682   14824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:24:16.443870   14824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2
	I0320 17:24:16.456058   14824 main.go:141] libmachine: STDOUT: 
	I0320 17:24:16.456079   14824 main.go:141] libmachine: STDERR: 
	I0320 17:24:16.456138   14824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2 +20000M
	I0320 17:24:16.467108   14824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:24:16.467128   14824 main.go:141] libmachine: STDERR: 
	I0320 17:24:16.467140   14824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2
	I0320 17:24:16.467144   14824 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:24:16.467169   14824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:e4:36:df:94:f3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2
	I0320 17:24:16.469104   14824 main.go:141] libmachine: STDOUT: 
	I0320 17:24:16.469120   14824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:24:16.469139   14824 client.go:171] duration metric: took 244.029208ms to LocalClient.Create
	I0320 17:24:18.471363   14824 start.go:128] duration metric: took 2.268099792s to createHost
	I0320 17:24:18.471429   14824 start.go:83] releasing machines lock for "force-systemd-flag-028000", held for 2.268227292s
	W0320 17:24:18.471488   14824 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:18.497644   14824 out.go:177] * Deleting "force-systemd-flag-028000" in qemu2 ...
	W0320 17:24:18.518362   14824 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:18.518382   14824 start.go:728] Will try again in 5 seconds ...
	I0320 17:24:23.520347   14824 start.go:360] acquireMachinesLock for force-systemd-flag-028000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:24:23.520686   14824 start.go:364] duration metric: took 265.875µs to acquireMachinesLock for "force-systemd-flag-028000"
	I0320 17:24:23.520824   14824 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-028000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-flag-028000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:24:23.521086   14824 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:24:23.525682   14824 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:24:23.575878   14824 start.go:159] libmachine.API.Create for "force-systemd-flag-028000" (driver="qemu2")
	I0320 17:24:23.575934   14824 client.go:168] LocalClient.Create starting
	I0320 17:24:23.576051   14824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:24:23.576110   14824 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:23.576134   14824 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:23.576216   14824 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:24:23.576244   14824 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:23.576256   14824 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:23.576814   14824 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:24:23.730745   14824 main.go:141] libmachine: Creating SSH key...
	I0320 17:24:23.906594   14824 main.go:141] libmachine: Creating Disk image...
	I0320 17:24:23.906603   14824 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:24:23.906826   14824 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2
	I0320 17:24:23.919587   14824 main.go:141] libmachine: STDOUT: 
	I0320 17:24:23.919631   14824 main.go:141] libmachine: STDERR: 
	I0320 17:24:23.919682   14824 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2 +20000M
	I0320 17:24:23.930281   14824 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:24:23.930296   14824 main.go:141] libmachine: STDERR: 
	I0320 17:24:23.930310   14824 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2
	I0320 17:24:23.930319   14824 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:24:23.930358   14824 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/qemu.pid -device virtio-net-pci,netdev=net0,mac=c2:5a:59:94:4c:13 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-flag-028000/disk.qcow2
	I0320 17:24:23.932066   14824 main.go:141] libmachine: STDOUT: 
	I0320 17:24:23.932085   14824 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:24:23.932097   14824 client.go:171] duration metric: took 356.1705ms to LocalClient.Create
	I0320 17:24:25.934187   14824 start.go:128] duration metric: took 2.413163042s to createHost
	I0320 17:24:25.934287   14824 start.go:83] releasing machines lock for "force-systemd-flag-028000", held for 2.413640916s
	W0320 17:24:25.934667   14824 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-028000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-flag-028000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:25.944391   14824 out.go:177] 
	W0320 17:24:25.951451   14824 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:24:25.951484   14824 out.go:239] * 
	* 
	W0320 17:24:25.954201   14824 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:24:25.963390   14824 out.go:177] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-flag-028000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-flag-028000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-flag-028000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.352709ms)

-- stdout --
	* The control-plane node force-systemd-flag-028000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-flag-028000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-flag-028000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2024-03-20 17:24:26.064559 -0700 PDT m=+1452.316588918
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-028000 -n force-systemd-flag-028000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-flag-028000 -n force-systemd-flag-028000: exit status 7 (35.819916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-flag-028000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-028000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-flag-028000
--- FAIL: TestForceSystemdFlag (10.11s)
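
TestForceSystemdFlag never reaches its real assertion, for the same socket_vmnet reason as above. For reference, once a VM does boot, the check at docker_test.go:110 reduces to querying Docker's cgroup driver inside the guest; the expected answer of "systemd" below is a sketch of the flag's intent rather than a quote from this run:

	# Ask the guest's Docker daemon which cgroup driver it uses.
	out/minikube-darwin-arm64 -p force-systemd-flag-028000 ssh "docker info --format {{.CgroupDriver}}"
	# Expected when --force-systemd takes effect: systemd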

TestForceSystemdEnv (10.22s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-arm64 start -p force-systemd-env-268000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 
docker_test.go:155: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p force-systemd-env-268000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 : exit status 80 (9.991304166s)

-- stdout --
	* [force-systemd-env-268000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "force-systemd-env-268000" primary control-plane node in "force-systemd-env-268000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "force-systemd-env-268000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:24:41.223277   14952 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:24:41.223407   14952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:24:41.223410   14952 out.go:304] Setting ErrFile to fd 2...
	I0320 17:24:41.223413   14952 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:24:41.223550   14952 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:24:41.225030   14952 out.go:298] Setting JSON to false
	I0320 17:24:41.243191   14952 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8651,"bootTime":1710972030,"procs":530,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:24:41.243256   14952 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:24:41.248596   14952 out.go:177] * [force-systemd-env-268000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:24:41.267383   14952 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:24:41.258555   14952 notify.go:220] Checking for updates...
	I0320 17:24:41.277429   14952 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:24:41.286297   14952 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:24:41.292407   14952 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:24:41.297411   14952 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:24:41.303403   14952 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0320 17:24:41.306921   14952 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:24:41.306976   14952 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:24:41.314270   14952 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:24:41.322448   14952 start.go:297] selected driver: qemu2
	I0320 17:24:41.322453   14952 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:24:41.322459   14952 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:24:41.325129   14952 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:24:41.333386   14952 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:24:41.337463   14952 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:24:41.337516   14952 cni.go:84] Creating CNI manager for ""
	I0320 17:24:41.337524   14952 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:24:41.337529   14952 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:24:41.337561   14952 start.go:340] cluster config:
	{Name:force-systemd-env-268000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:24:41.342999   14952 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:24:41.347396   14952 out.go:177] * Starting "force-systemd-env-268000" primary control-plane node in "force-systemd-env-268000" cluster
	I0320 17:24:41.358392   14952 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:24:41.358410   14952 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:24:41.358421   14952 cache.go:56] Caching tarball of preloaded images
	I0320 17:24:41.358508   14952 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:24:41.358515   14952 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:24:41.358597   14952 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/force-systemd-env-268000/config.json ...
	I0320 17:24:41.358610   14952 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/force-systemd-env-268000/config.json: {Name:mkf142f199e325bd3630a4264754914aac7e65d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:24:41.362457   14952 start.go:360] acquireMachinesLock for force-systemd-env-268000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:24:41.362542   14952 start.go:364] duration metric: took 63.792µs to acquireMachinesLock for "force-systemd-env-268000"
	I0320 17:24:41.362562   14952 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:24:41.362595   14952 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:24:41.371470   14952 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:24:41.391009   14952 start.go:159] libmachine.API.Create for "force-systemd-env-268000" (driver="qemu2")
	I0320 17:24:41.391043   14952 client.go:168] LocalClient.Create starting
	I0320 17:24:41.391107   14952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:24:41.391143   14952 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:41.391152   14952 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:41.391212   14952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:24:41.391236   14952 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:41.391244   14952 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:41.391651   14952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:24:41.647422   14952 main.go:141] libmachine: Creating SSH key...
	I0320 17:24:41.741813   14952 main.go:141] libmachine: Creating Disk image...
	I0320 17:24:41.741819   14952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:24:41.741989   14952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2
	I0320 17:24:41.754666   14952 main.go:141] libmachine: STDOUT: 
	I0320 17:24:41.754682   14952 main.go:141] libmachine: STDERR: 
	I0320 17:24:41.754742   14952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2 +20000M
	I0320 17:24:41.765672   14952 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:24:41.765691   14952 main.go:141] libmachine: STDERR: 
	I0320 17:24:41.765718   14952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2
	I0320 17:24:41.765727   14952 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:24:41.765766   14952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ce:80:8d:1e:af:bc -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2
	I0320 17:24:41.767600   14952 main.go:141] libmachine: STDOUT: 
	I0320 17:24:41.767619   14952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:24:41.767636   14952 client.go:171] duration metric: took 376.601375ms to LocalClient.Create
	I0320 17:24:43.769815   14952 start.go:128] duration metric: took 2.407282125s to createHost
	I0320 17:24:43.769902   14952 start.go:83] releasing machines lock for "force-systemd-env-268000", held for 2.407441875s
	W0320 17:24:43.770026   14952 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:43.796435   14952 out.go:177] * Deleting "force-systemd-env-268000" in qemu2 ...
	W0320 17:24:43.818744   14952 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:43.818763   14952 start.go:728] Will try again in 5 seconds ...
	I0320 17:24:48.819904   14952 start.go:360] acquireMachinesLock for force-systemd-env-268000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:24:48.820337   14952 start.go:364] duration metric: took 323.042µs to acquireMachinesLock for "force-systemd-env-268000"
	I0320 17:24:48.820471   14952 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-268000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.29.3 ClusterName:force-systemd-env-268000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:24:48.820768   14952 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:24:48.830404   14952 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0320 17:24:48.879008   14952 start.go:159] libmachine.API.Create for "force-systemd-env-268000" (driver="qemu2")
	I0320 17:24:48.879058   14952 client.go:168] LocalClient.Create starting
	I0320 17:24:48.879171   14952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:24:48.879233   14952 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:48.879247   14952 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:48.879306   14952 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:24:48.879355   14952 main.go:141] libmachine: Decoding PEM data...
	I0320 17:24:48.879368   14952 main.go:141] libmachine: Parsing certificate...
	I0320 17:24:48.880553   14952 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:24:49.048442   14952 main.go:141] libmachine: Creating SSH key...
	I0320 17:24:49.111177   14952 main.go:141] libmachine: Creating Disk image...
	I0320 17:24:49.111184   14952 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:24:49.111365   14952 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2
	I0320 17:24:49.123602   14952 main.go:141] libmachine: STDOUT: 
	I0320 17:24:49.123627   14952 main.go:141] libmachine: STDERR: 
	I0320 17:24:49.123688   14952 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2 +20000M
	I0320 17:24:49.134547   14952 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:24:49.134565   14952 main.go:141] libmachine: STDERR: 
	I0320 17:24:49.134581   14952 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2
	I0320 17:24:49.134585   14952 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:24:49.134629   14952 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2048 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fa:52:27:ff:be:8d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/force-systemd-env-268000/disk.qcow2
	I0320 17:24:49.136336   14952 main.go:141] libmachine: STDOUT: 
	I0320 17:24:49.136351   14952 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:24:49.136368   14952 client.go:171] duration metric: took 257.315042ms to LocalClient.Create
	I0320 17:24:51.138467   14952 start.go:128] duration metric: took 2.317757709s to createHost
	I0320 17:24:51.138517   14952 start.go:83] releasing machines lock for "force-systemd-env-268000", held for 2.318245417s
	W0320 17:24:51.138899   14952 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p force-systemd-env-268000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:24:51.150691   14952 out.go:177] 
	W0320 17:24:51.155638   14952 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:24:51.155664   14952 out.go:239] * 
	* 
	W0320 17:24:51.158239   14952 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:24:51.167588   14952 out.go:177] 

** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-darwin-arm64 start -p force-systemd-env-268000 --memory=2048 --alsologtostderr -v=5 --driver=qemu2 " : exit status 80
docker_test.go:110: (dbg) Run:  out/minikube-darwin-arm64 -p force-systemd-env-268000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p force-systemd-env-268000 ssh "docker info --format {{.CgroupDriver}}": exit status 83 (80.632084ms)

-- stdout --
	* The control-plane node force-systemd-env-268000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p force-systemd-env-268000"

-- /stdout --
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-darwin-arm64 -p force-systemd-env-268000 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 83
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2024-03-20 17:24:51.266744 -0700 PDT m=+1477.519756168
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-268000 -n force-systemd-env-268000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p force-systemd-env-268000 -n force-systemd-env-268000: exit status 7 (35.266375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "force-systemd-env-268000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-268000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p force-systemd-env-268000
--- FAIL: TestForceSystemdEnv (10.22s)
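
A note on the failure mode: qemu-img prepares the disk image cleanly in the trace above; the start only dies when the qemu-system-aarch64 launch is wrapped in /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet. A minimal Go sketch (a hypothetical probe, not minikube code) that reproduces the same "Connection refused" condition before any VM work is attempted:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// probeSocket dials the socket_vmnet unix socket and reports whether a
// daemon is accepting connections there. A "connection refused" error from
// this dial is the same condition the qemu2 driver hits above.
func probeSocket(path string) error {
	conn, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probeSocket("/var/run/socket_vmnet"); err != nil {
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("socket_vmnet is accepting connections")
}

If the daemon is down on the build agent, every qemu2 start below fails the same way, regardless of which test drives it.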

TestErrorSpam/setup (9.79s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -p nospam-044000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 --driver=qemu2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p nospam-044000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 --driver=qemu2 : exit status 80 (9.783978416s)

-- stdout --
	* [nospam-044000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "nospam-044000" primary control-plane node in "nospam-044000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "nospam-044000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p nospam-044000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-darwin-arm64 start -p nospam-044000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 --driver=qemu2 " failed: exit status 80
error_spam_test.go:96: unexpected stderr: "! StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* Failed to start qemu2 VM. Running \"minikube delete -p nospam-044000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-044000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=17713
- KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "nospam-044000" primary control-plane node in "nospam-044000" cluster
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "nospam-044000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

error_spam_test.go:111: minikube stderr:
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* Failed to start qemu2 VM. Running "minikube delete -p nospam-044000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (9.79s)
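
This test fails twice over: the start itself exits 80, and each resulting stderr line then trips the spam check, because error_spam_test.go compares stderr against a set of permitted line prefixes. A simplified Go sketch of that style of check (the allow-list in main is illustrative, not the test's real list):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// unexpectedStderr returns the stderr lines matching none of the allowed
// prefixes, mirroring the shape of the check in error_spam_test.go.
func unexpectedStderr(stderr string, allowed []string) []string {
	var bad []string
	sc := bufio.NewScanner(strings.NewReader(stderr))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		permitted := false
		for _, p := range allowed {
			if strings.HasPrefix(line, p) {
				permitted = true
				break
			}
		}
		if !permitted {
			bad = append(bad, line)
		}
	}
	return bad
}

func main() {
	stderr := "! StartHost failed, but will try again: exit status 1\n* Failed to start qemu2 VM."
	for _, line := range unexpectedStderr(stderr, []string{"I0320", "W0320"}) {
		fmt.Printf("unexpected stderr: %q\n", line)
	}
}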

TestFunctional/serial/StartWithProxy (9.92s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-829000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 
functional_test.go:2230: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-829000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 : exit status 80 (9.851059041s)

-- stdout --
	* [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "functional-829000" primary control-plane node in "functional-829000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "functional-829000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env.
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env.
	* Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2232: failed minikube start. args "out/minikube-darwin-arm64 start -p functional-829000 --memory=4000 --apiserver-port=8441 --wait=all --driver=qemu2 ": exit status 80
functional_test.go:2237: start stdout=* [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
- MINIKUBE_LOCATION=17713
- KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
- MINIKUBE_FORCE_SYSTEMD=
* Using the qemu2 driver based on user configuration
* Automatically selected the socket_vmnet network
* Starting "functional-829000" primary control-plane node in "functional-829000" cluster
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

* Deleting "functional-829000" in qemu2 ...
* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
OUTPUT: 
ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

, want: *Found network options:*
functional_test.go:2242: start stderr=! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env.
! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env.
! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env.
* Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
, want: *You appear to be using a proxy*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (70.873625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/StartWithProxy (9.92s)
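
The repeated "! Local proxy ignored: not passing HTTP_PROXY=localhost:52114 to docker env." lines come from the test exporting a loopback proxy: minikube declines to forward it because "localhost" inside the VM is not the host machine. A rough Go sketch of that kind of loopback check (simplified; not minikube's actual implementation):

package main

import (
	"fmt"
	"net"
	"net/url"
	"os"
	"strings"
)

// isLocalProxy reports whether a proxy value such as "localhost:52114"
// points at the host's loopback interface.
func isLocalProxy(raw string) bool {
	if !strings.Contains(raw, "://") {
		raw = "http://" + raw // bare host:port values carry no scheme
	}
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	host := u.Hostname()
	if host == "localhost" {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLoopback()
}

func main() {
	if p := os.Getenv("HTTP_PROXY"); p != "" && isLocalProxy(p) {
		fmt.Printf("local proxy ignored: %s\n", p)
	}
}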

TestFunctional/serial/SoftStart (5.27s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-829000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-829000 --alsologtostderr -v=8: exit status 80 (5.192524125s)

-- stdout --
	* [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-829000" primary control-plane node in "functional-829000" cluster
	* Restarting existing qemu2 VM for "functional-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:02:29.800191   12998 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:02:29.800326   12998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:02:29.800329   12998 out.go:304] Setting ErrFile to fd 2...
	I0320 17:02:29.800332   12998 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:02:29.800470   12998 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:02:29.801439   12998 out.go:298] Setting JSON to false
	I0320 17:02:29.817594   12998 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7319,"bootTime":1710972030,"procs":507,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:02:29.817662   12998 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:02:29.822209   12998 out.go:177] * [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:02:29.828375   12998 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:02:29.831249   12998 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:02:29.828425   12998 notify.go:220] Checking for updates...
	I0320 17:02:29.839292   12998 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:02:29.842238   12998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:02:29.845233   12998 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:02:29.848314   12998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:02:29.851482   12998 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:02:29.851539   12998 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:02:29.856232   12998 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:02:29.863229   12998 start.go:297] selected driver: qemu2
	I0320 17:02:29.863236   12998 start.go:901] validating driver "qemu2" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[
] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:02:29.863331   12998 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:02:29.865621   12998 cni.go:84] Creating CNI manager for ""
	I0320 17:02:29.865638   12998 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:02:29.865695   12998 start.go:340] cluster config:
	{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServe
rHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:02:29.870020   12998 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:02:29.878258   12998 out.go:177] * Starting "functional-829000" primary control-plane node in "functional-829000" cluster
	I0320 17:02:29.882237   12998 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:02:29.882252   12998 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:02:29.882265   12998 cache.go:56] Caching tarball of preloaded images
	I0320 17:02:29.882315   12998 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:02:29.882320   12998 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:02:29.882392   12998 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/functional-829000/config.json ...
	I0320 17:02:29.882859   12998 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:02:29.882886   12998 start.go:364] duration metric: took 20.791µs to acquireMachinesLock for "functional-829000"
	I0320 17:02:29.882894   12998 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:02:29.882903   12998 fix.go:54] fixHost starting: 
	I0320 17:02:29.883019   12998 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
	W0320 17:02:29.883027   12998 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:02:29.890167   12998 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
	I0320 17:02:29.894417   12998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
	I0320 17:02:29.896427   12998 main.go:141] libmachine: STDOUT: 
	I0320 17:02:29.896447   12998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:02:29.896476   12998 fix.go:56] duration metric: took 13.575542ms for fixHost
	I0320 17:02:29.896480   12998 start.go:83] releasing machines lock for "functional-829000", held for 13.591083ms
	W0320 17:02:29.896488   12998 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:02:29.896528   12998 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:02:29.896533   12998 start.go:728] Will try again in 5 seconds ...
	I0320 17:02:34.898459   12998 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:02:34.898722   12998 start.go:364] duration metric: took 211µs to acquireMachinesLock for "functional-829000"
	I0320 17:02:34.898877   12998 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:02:34.898898   12998 fix.go:54] fixHost starting: 
	I0320 17:02:34.899548   12998 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
	W0320 17:02:34.899574   12998 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:02:34.904874   12998 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
	I0320 17:02:34.914244   12998 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
	I0320 17:02:34.923991   12998 main.go:141] libmachine: STDOUT: 
	I0320 17:02:34.924082   12998 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:02:34.924182   12998 fix.go:56] duration metric: took 25.284083ms for fixHost
	I0320 17:02:34.924202   12998 start.go:83] releasing machines lock for "functional-829000", held for 25.452833ms
	W0320 17:02:34.924405   12998 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:02:34.932944   12998 out.go:177] 
	W0320 17:02:34.936079   12998 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:02:34.936103   12998 out.go:239] * 
	* 
	W0320 17:02:34.938747   12998 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:02:34.946900   12998 out.go:177] 

** /stderr **
functional_test.go:657: failed to soft start minikube. args "out/minikube-darwin-arm64 start -p functional-829000 --alsologtostderr -v=8": exit status 80
functional_test.go:659: soft start took 5.194349625s for "functional-829000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (70.5145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/SoftStart (5.27s)
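
The trace makes the driver's retry shape easy to read: fixHost fails within milliseconds, start.go waits five seconds ("Will try again in 5 seconds ..."), retries once, and then exits with GUEST_PROVISION. A compact Go sketch of that single-retry pattern (hypothetical helper, for illustration only):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry retries a failed start once after a fixed five-second
// pause, matching the behaviour visible in the log above.
func startWithRetry(start func() error) error {
	err := start()
	if err == nil {
		return nil
	}
	fmt.Printf("! StartHost failed, but will try again: %v\n", err)
	time.Sleep(5 * time.Second)
	return start()
}

func main() {
	err := startWithRetry(func() error {
		return errors.New(`driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	})
	if err != nil {
		fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
	}
}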

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
functional_test.go:677: (dbg) Non-zero exit: kubectl config current-context: exit status 1 (29.233667ms)

** stderr ** 
	error: current-context is not set

** /stderr **
functional_test.go:679: failed to get current-context. args "kubectl config current-context" : exit status 1
functional_test.go:683: expected current-context = "functional-829000", but got *""*
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (32.538208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubeContext (0.06s)
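
Because no cluster was ever brought up, the kubeconfig has no current-context, so "kubectl config current-context" exits 1. A small Go sketch of shelling out for the context the way this test does:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// currentContext runs kubectl the same way the test does; with no context
// set, kubectl exits non-zero and err carries "exit status 1".
func currentContext() (string, error) {
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	ctx, err := currentContext()
	if err != nil {
		fmt.Println("no current context:", err)
		return
	}
	fmt.Println("current context:", ctx)
}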

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-829000 get po -A
functional_test.go:692: (dbg) Non-zero exit: kubectl --context functional-829000 get po -A: exit status 1 (26.043042ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

** /stderr **
functional_test.go:694: failed to get kubectl pods: args "kubectl --context functional-829000 get po -A" : exit status 1
functional_test.go:698: expected stderr to be empty but got *"Error in configuration: context was not found for specified context: functional-829000\n"*: args "kubectl --context functional-829000 get po -A"
functional_test.go:701: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-829000 get po -A"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (31.853458ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl images
functional_test.go:1120: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl images: exit status 83 (42.924417ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1122: failed to get images by "out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl images" ssh exit status 83
functional_test.go:1126: expected sha for pause:3.3 "3d18732f8686c" to be in the output but got *
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --*
--- FAIL: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.04s)

TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh sudo docker rmi registry.k8s.io/pause:latest: exit status 83 (43.784ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1146: failed to manually delete image "out/minikube-darwin-arm64 -p functional-829000 ssh sudo docker rmi registry.k8s.io/pause:latest" : exit status 83
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (45.042958ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 83 (41.63125ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1161: expected "out/minikube-darwin-arm64 -p functional-829000 ssh sudo crictl inspecti registry.k8s.io/pause:latest" to run successfully but got error: exit status 83
--- FAIL: TestFunctional/serial/CacheCmd/cache/cache_reload (0.17s)

TestFunctional/serial/MinikubeKubectlCmd (0.69s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 kubectl -- --context functional-829000 get pods
functional_test.go:712: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 kubectl -- --context functional-829000 get pods: exit status 1 (656.715625ms)

** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-829000
	* no server found for cluster "functional-829000"

** /stderr **
functional_test.go:715: failed to get pods. args "out/minikube-darwin-arm64 -p functional-829000 kubectl -- --context functional-829000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (34.1765ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmd (0.69s)
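Note: the "context was not found" error means the earlier failed start never wrote a functional-829000 entry into the kubeconfig at /Users/jenkins/minikube-integration/17713-12248/kubeconfig. A quick way to confirm, using only standard kubectl subcommands:

    KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig \
      kubectl config get-contexts           # functional-829000 should be missing from the list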

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-829000 get pods
functional_test.go:737: (dbg) Non-zero exit: out/kubectl --context functional-829000 get pods: exit status 1 (899.163ms)

                                                
                                                
** stderr ** 
	Error in configuration: 
	* context was not found for specified context: functional-829000
	* no server found for cluster "functional-829000"

                                                
                                                
** /stderr **
functional_test.go:740: failed to run kubectl directly. args "out/kubectl --context functional-829000 get pods": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (31.445ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.93s)
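Note: this test exercises the bundled out/kubectl binary directly, so the identical stderr confirms the failure is in the kubeconfig, not in the minikube wrapper. A sanity check that the binary itself is functional, independent of any cluster (an illustrative invocation, not part of the test):

    out/kubectl version --client            # prints the client version even with no cluster
    out/kubectl config get-contexts         # same missing-context result as above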

                                                
                                    
TestFunctional/serial/ExtraConfig (5.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-829000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-829000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 80 (5.198642583s)

                                                
                                                
-- stdout --
	* [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "functional-829000" primary control-plane node in "functional-829000" cluster
	* Restarting existing qemu2 VM for "functional-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "functional-829000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:755: failed to restart minikube. args "out/minikube-darwin-arm64 start -p functional-829000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 80
functional_test.go:757: restart took 5.199142667s for "functional-829000" cluster.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (73.568041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ExtraConfig (5.27s)
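Note: the stderr above already names the standard recovery path. Assuming socket_vmnet is reachable again, the full retry that minikube itself suggests would be:

    out/minikube-darwin-arm64 delete -p functional-829000
    out/minikube-darwin-arm64 start -p functional-829000 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all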

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-829000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:806: (dbg) Non-zero exit: kubectl --context functional-829000 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (29.15975ms)

                                                
                                                
** stderr ** 
	error: context "functional-829000" does not exist

                                                
                                                
** /stderr **
functional_test.go:808: failed to get components. args "kubectl --context functional-829000 get po -l tier=control-plane -n kube-system -o=json": exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (32.306209ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/serial/ComponentHealth (0.06s)
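Note: the component-health query itself is well-formed; it fails only because the context is gone. Once a cluster is actually running, the same question can also be put to the apiserver directly via its standard health endpoints, rather than via pod labels:

    kubectl --context functional-829000 get po -l tier=control-plane -n kube-system
    kubectl --context functional-829000 get --raw='/readyz?verbose'   # per-check apiserver readiness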

                                                
                                    
TestFunctional/serial/LogsCmd (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 logs
functional_test.go:1232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 logs: exit status 83 (78.739333ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                  | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
	|         | -p download-only-293000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
	| delete  | -p download-only-293000                                                  | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
	| start   | -o=json --download-only                                                  | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
	|         | -p download-only-885000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3                                             |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| delete  | -p download-only-885000                                                  | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| start   | -o=json --download-only                                                  | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | -p download-only-684000                                                  |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |         |                     |                     |
	|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| delete  | -p download-only-684000                                                  | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| delete  | -p download-only-293000                                                  | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| delete  | -p download-only-885000                                                  | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| delete  | -p download-only-684000                                                  | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| start   | --download-only -p                                                       | binary-mirror-025000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | binary-mirror-025000                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
	|         | --binary-mirror                                                          |                      |         |         |                     |                     |
	|         | http://127.0.0.1:52078                                                   |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-025000                                                  | binary-mirror-025000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| addons  | enable dashboard -p                                                      | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | addons-367000                                                            |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                     | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | addons-367000                                                            |                      |         |         |                     |                     |
	| start   | -p addons-367000 --wait=true                                             | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
	|         | --addons=registry                                                        |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
	|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
	| delete  | -p addons-367000                                                         | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| start   | -p nospam-044000 -n=1 --memory=2250 --wait=false                         | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 |                      |         |         |                     |                     |
	|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
	| start   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| start   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | start --dry-run                                                          |                      |         |         |                     |                     |
	| pause   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| pause   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | pause                                                                    |                      |         |         |                     |                     |
	| unpause | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| unpause | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | unpause                                                                  |                      |         |         |                     |                     |
	| stop    | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| stop    | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
	|         | stop                                                                     |                      |         |         |                     |                     |
	| delete  | -p nospam-044000                                                         | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	| start   | -p functional-829000                                                     | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | --memory=4000                                                            |                      |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
	|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
	| start   | -p functional-829000                                                     | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
	| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | minikube-local-cache-test:functional-829000                              |                      |         |         |                     |                     |
	| cache   | functional-829000 cache delete                                           | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | minikube-local-cache-test:functional-829000                              |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
	| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	| ssh     | functional-829000 ssh sudo                                               | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | crictl images                                                            |                      |         |         |                     |                     |
	| ssh     | functional-829000                                                        | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| ssh     | functional-829000 ssh                                                    | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | functional-829000 cache reload                                           | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	| ssh     | functional-829000 ssh                                                    | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
	| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
	|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
	| kubectl | functional-829000 kubectl --                                             | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | --context functional-829000                                              |                      |         |         |                     |                     |
	|         | get pods                                                                 |                      |         |         |                     |                     |
	| start   | -p functional-829000                                                     | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
	|         | --wait=all                                                               |                      |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/20 17:02:44
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0320 17:02:44.624527   13083 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:02:44.624624   13083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:02:44.624626   13083 out.go:304] Setting ErrFile to fd 2...
	I0320 17:02:44.624628   13083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:02:44.624748   13083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:02:44.625695   13083 out.go:298] Setting JSON to false
	I0320 17:02:44.641461   13083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7334,"bootTime":1710972030,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:02:44.641517   13083 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:02:44.647499   13083 out.go:177] * [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:02:44.657140   13083 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:02:44.657176   13083 notify.go:220] Checking for updates...
	I0320 17:02:44.666098   13083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:02:44.669870   13083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:02:44.673070   13083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:02:44.676074   13083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:02:44.679110   13083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:02:44.682411   13083 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:02:44.682459   13083 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:02:44.687058   13083 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:02:44.696050   13083 start.go:297] selected driver: qemu2
	I0320 17:02:44.696054   13083 start.go:901] validating driver "qemu2" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:02:44.696111   13083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:02:44.698500   13083 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:02:44.698554   13083 cni.go:84] Creating CNI manager for ""
	I0320 17:02:44.698563   13083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:02:44.698604   13083 start.go:340] cluster config:
	{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:02:44.703384   13083 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:02:44.712049   13083 out.go:177] * Starting "functional-829000" primary control-plane node in "functional-829000" cluster
	I0320 17:02:44.715074   13083 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:02:44.715090   13083 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:02:44.715098   13083 cache.go:56] Caching tarball of preloaded images
	I0320 17:02:44.715164   13083 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:02:44.715170   13083 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:02:44.715239   13083 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/functional-829000/config.json ...
	I0320 17:02:44.715798   13083 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:02:44.715836   13083 start.go:364] duration metric: took 33.167µs to acquireMachinesLock for "functional-829000"
	I0320 17:02:44.715846   13083 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:02:44.715851   13083 fix.go:54] fixHost starting: 
	I0320 17:02:44.715974   13083 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
	W0320 17:02:44.715982   13083 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:02:44.723057   13083 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
	I0320 17:02:44.729091   13083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
	I0320 17:02:44.731319   13083 main.go:141] libmachine: STDOUT: 
	I0320 17:02:44.731339   13083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:02:44.731371   13083 fix.go:56] duration metric: took 15.520708ms for fixHost
	I0320 17:02:44.731380   13083 start.go:83] releasing machines lock for "functional-829000", held for 15.536625ms
	W0320 17:02:44.731387   13083 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:02:44.731441   13083 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:02:44.731446   13083 start.go:728] Will try again in 5 seconds ...
	I0320 17:02:49.732075   13083 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:02:49.732545   13083 start.go:364] duration metric: took 320.291µs to acquireMachinesLock for "functional-829000"
	I0320 17:02:49.732645   13083 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:02:49.732658   13083 fix.go:54] fixHost starting: 
	I0320 17:02:49.733309   13083 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
	W0320 17:02:49.733327   13083 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:02:49.742704   13083 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
	I0320 17:02:49.748906   13083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
	I0320 17:02:49.757835   13083 main.go:141] libmachine: STDOUT: 
	I0320 17:02:49.757881   13083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:02:49.757966   13083 fix.go:56] duration metric: took 25.311583ms for fixHost
	I0320 17:02:49.757978   13083 start.go:83] releasing machines lock for "functional-829000", held for 25.415958ms
	W0320 17:02:49.758128   13083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:02:49.765641   13083 out.go:177] 
	W0320 17:02:49.769806   13083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:02:49.769837   13083 out.go:239] * 
	W0320 17:02:49.771175   13083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:02:49.781822   13083 out.go:177] 
	
	
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:1234: out/minikube-darwin-arm64 -p functional-829000 logs failed: exit status 83
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| Command |                                   Args                                   |       Profile        |  User   | Version |     Start Time      |      End Time       |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
| start   | -o=json --download-only                                                  | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
|         | -p download-only-293000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.20.0                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
| delete  | -p download-only-293000                                                  | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
| start   | -o=json --download-only                                                  | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
|         | -p download-only-885000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.29.3                                             |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| delete  | -p download-only-885000                                                  | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| start   | -o=json --download-only                                                  | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
|         | -p download-only-684000                                                  |                      |         |         |                     |                     |
|         | --force --alsologtostderr                                                |                      |         |         |                     |                     |
|         | --kubernetes-version=v1.30.0-beta.0                                      |                      |         |         |                     |                     |
|         | --container-runtime=docker                                               |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | --all                                                                    | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| delete  | -p download-only-684000                                                  | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| delete  | -p download-only-293000                                                  | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| delete  | -p download-only-885000                                                  | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| delete  | -p download-only-684000                                                  | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| start   | --download-only -p                                                       | binary-mirror-025000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
|         | binary-mirror-025000                                                     |                      |         |         |                     |                     |
|         | --alsologtostderr                                                        |                      |         |         |                     |                     |
|         | --binary-mirror                                                          |                      |         |         |                     |                     |
|         | http://127.0.0.1:52078                                                   |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| delete  | -p binary-mirror-025000                                                  | binary-mirror-025000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| addons  | enable dashboard -p                                                      | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
|         | addons-367000                                                            |                      |         |         |                     |                     |
| addons  | disable dashboard -p                                                     | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
|         | addons-367000                                                            |                      |         |         |                     |                     |
| start   | -p addons-367000 --wait=true                                             | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
|         | --memory=4000 --alsologtostderr                                          |                      |         |         |                     |                     |
|         | --addons=registry                                                        |                      |         |         |                     |                     |
|         | --addons=metrics-server                                                  |                      |         |         |                     |                     |
|         | --addons=volumesnapshots                                                 |                      |         |         |                     |                     |
|         | --addons=csi-hostpath-driver                                             |                      |         |         |                     |                     |
|         | --addons=gcp-auth                                                        |                      |         |         |                     |                     |
|         | --addons=cloud-spanner                                                   |                      |         |         |                     |                     |
|         | --addons=inspektor-gadget                                                |                      |         |         |                     |                     |
|         | --addons=storage-provisioner-rancher                                     |                      |         |         |                     |                     |
|         | --addons=nvidia-device-plugin                                            |                      |         |         |                     |                     |
|         | --addons=yakd --driver=qemu2                                             |                      |         |         |                     |                     |
|         |  --addons=ingress                                                        |                      |         |         |                     |                     |
|         | --addons=ingress-dns                                                     |                      |         |         |                     |                     |
| delete  | -p addons-367000                                                         | addons-367000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
| start   | -p nospam-044000 -n=1 --memory=2250 --wait=false                         | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
|         | --log_dir=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 |                      |         |         |                     |                     |
|         | --driver=qemu2                                                           |                      |         |         |                     |                     |
| start   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| start   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | start --dry-run                                                          |                      |         |         |                     |                     |
| pause   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| pause   | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | pause                                                                    |                      |         |         |                     |                     |
| unpause | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| unpause | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | unpause                                                                  |                      |         |         |                     |                     |
| stop    | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| stop    | nospam-044000 --log_dir                                                  | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000           |                      |         |         |                     |                     |
|         | stop                                                                     |                      |         |         |                     |                     |
| delete  | -p nospam-044000                                                         | nospam-044000        | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
| start   | -p functional-829000                                                     | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | --memory=4000                                                            |                      |         |         |                     |                     |
|         | --apiserver-port=8441                                                    |                      |         |         |                     |                     |
|         | --wait=all --driver=qemu2                                                |                      |         |         |                     |                     |
| start   | -p functional-829000                                                     | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | --alsologtostderr -v=8                                                   |                      |         |         |                     |                     |
| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-829000 cache add                                              | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | minikube-local-cache-test:functional-829000                              |                      |         |         |                     |                     |
| cache   | functional-829000 cache delete                                           | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | minikube-local-cache-test:functional-829000                              |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | registry.k8s.io/pause:3.3                                                |                      |         |         |                     |                     |
| cache   | list                                                                     | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
| ssh     | functional-829000 ssh sudo                                               | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | crictl images                                                            |                      |         |         |                     |                     |
| ssh     | functional-829000                                                        | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | ssh sudo docker rmi                                                      |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| ssh     | functional-829000 ssh                                                    | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | functional-829000 cache reload                                           | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
| ssh     | functional-829000 ssh                                                    | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | sudo crictl inspecti                                                     |                      |         |         |                     |                     |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | registry.k8s.io/pause:3.1                                                |                      |         |         |                     |                     |
| cache   | delete                                                                   | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT | 20 Mar 24 17:02 PDT |
|         | registry.k8s.io/pause:latest                                             |                      |         |         |                     |                     |
| kubectl | functional-829000 kubectl --                                             | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | --context functional-829000                                              |                      |         |         |                     |                     |
|         | get pods                                                                 |                      |         |         |                     |                     |
| start   | -p functional-829000                                                     | functional-829000    | jenkins | v1.32.0 | 20 Mar 24 17:02 PDT |                     |
|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                      |         |         |                     |                     |
|         | --wait=all                                                               |                      |         |         |                     |                     |
|---------|--------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2024/03/20 17:02:44
Running on machine: MacOS-M1-Agent-2
Binary: Built with gc go1.22.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0320 17:02:44.624527   13083 out.go:291] Setting OutFile to fd 1 ...
I0320 17:02:44.624624   13083 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:02:44.624626   13083 out.go:304] Setting ErrFile to fd 2...
I0320 17:02:44.624628   13083 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:02:44.624748   13083 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:02:44.625695   13083 out.go:298] Setting JSON to false
I0320 17:02:44.641461   13083 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7334,"bootTime":1710972030,"procs":508,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
W0320 17:02:44.641517   13083 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0320 17:02:44.647499   13083 out.go:177] * [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
I0320 17:02:44.657140   13083 out.go:177]   - MINIKUBE_LOCATION=17713
I0320 17:02:44.657176   13083 notify.go:220] Checking for updates...
I0320 17:02:44.666098   13083 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
I0320 17:02:44.669870   13083 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
I0320 17:02:44.673070   13083 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
I0320 17:02:44.676074   13083 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
I0320 17:02:44.679110   13083 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
I0320 17:02:44.682411   13083 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:02:44.682459   13083 driver.go:392] Setting default libvirt URI to qemu:///system
I0320 17:02:44.687058   13083 out.go:177] * Using the qemu2 driver based on existing profile
I0320 17:02:44.696050   13083 start.go:297] selected driver: qemu2
I0320 17:02:44.696054   13083 start.go:901] validating driver "qemu2" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0320 17:02:44.696111   13083 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0320 17:02:44.698500   13083 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
I0320 17:02:44.698554   13083 cni.go:84] Creating CNI manager for ""
I0320 17:02:44.698563   13083 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0320 17:02:44.698604   13083 start.go:340] cluster config:
{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0320 17:02:44.703384   13083 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 17:02:44.712049   13083 out.go:177] * Starting "functional-829000" primary control-plane node in "functional-829000" cluster
I0320 17:02:44.715074   13083 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0320 17:02:44.715090   13083 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0320 17:02:44.715098   13083 cache.go:56] Caching tarball of preloaded images
I0320 17:02:44.715164   13083 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0320 17:02:44.715170   13083 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0320 17:02:44.715239   13083 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/functional-829000/config.json ...
I0320 17:02:44.715798   13083 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0320 17:02:44.715836   13083 start.go:364] duration metric: took 33.167µs to acquireMachinesLock for "functional-829000"
I0320 17:02:44.715846   13083 start.go:96] Skipping create...Using existing machine configuration
I0320 17:02:44.715851   13083 fix.go:54] fixHost starting: 
I0320 17:02:44.715974   13083 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
W0320 17:02:44.715982   13083 fix.go:138] unexpected machine state, will restart: <nil>
I0320 17:02:44.723057   13083 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
I0320 17:02:44.729091   13083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
I0320 17:02:44.731319   13083 main.go:141] libmachine: STDOUT: 
I0320 17:02:44.731339   13083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0320 17:02:44.731371   13083 fix.go:56] duration metric: took 15.520708ms for fixHost
I0320 17:02:44.731380   13083 start.go:83] releasing machines lock for "functional-829000", held for 15.536625ms
W0320 17:02:44.731387   13083 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0320 17:02:44.731441   13083 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0320 17:02:44.731446   13083 start.go:728] Will try again in 5 seconds ...
I0320 17:02:49.732075   13083 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0320 17:02:49.732545   13083 start.go:364] duration metric: took 320.291µs to acquireMachinesLock for "functional-829000"
I0320 17:02:49.732645   13083 start.go:96] Skipping create...Using existing machine configuration
I0320 17:02:49.732658   13083 fix.go:54] fixHost starting: 
I0320 17:02:49.733309   13083 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
W0320 17:02:49.733327   13083 fix.go:138] unexpected machine state, will restart: <nil>
I0320 17:02:49.742704   13083 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
I0320 17:02:49.748906   13083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
I0320 17:02:49.757835   13083 main.go:141] libmachine: STDOUT: 
I0320 17:02:49.757881   13083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0320 17:02:49.757966   13083 fix.go:56] duration metric: took 25.311583ms for fixHost
I0320 17:02:49.757978   13083 start.go:83] releasing machines lock for "functional-829000", held for 25.415958ms
W0320 17:02:49.758128   13083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0320 17:02:49.765641   13083 out.go:177] 
W0320 17:02:49.769806   13083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0320 17:02:49.769837   13083 out.go:239] * 
W0320 17:02:49.771175   13083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0320 17:02:49.781822   13083 out.go:177] 

* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
***
--- FAIL: TestFunctional/serial/LogsCmd (0.08s)

TestFunctional/serial/LogsFileCmd (0.08s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 logs --file /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialLogsFileCmd1176945995/001/logs.txt
functional_test.go:1224: expected minikube logs to include word: -"Linux"- but got 
***
==> Audit <==
(identical to the Audit table shown above for TestFunctional/serial/LogsCmd)

==> Last Start <==
(identical to the Last Start log shown above for TestFunctional/serial/LogsCmd)
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0320 17:02:44.703384   13083 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0320 17:02:44.712049   13083 out.go:177] * Starting "functional-829000" primary control-plane node in "functional-829000" cluster
I0320 17:02:44.715074   13083 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
I0320 17:02:44.715090   13083 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
I0320 17:02:44.715098   13083 cache.go:56] Caching tarball of preloaded images
I0320 17:02:44.715164   13083 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
I0320 17:02:44.715170   13083 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
I0320 17:02:44.715239   13083 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/functional-829000/config.json ...
I0320 17:02:44.715798   13083 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0320 17:02:44.715836   13083 start.go:364] duration metric: took 33.167µs to acquireMachinesLock for "functional-829000"
I0320 17:02:44.715846   13083 start.go:96] Skipping create...Using existing machine configuration
I0320 17:02:44.715851   13083 fix.go:54] fixHost starting: 
I0320 17:02:44.715974   13083 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
W0320 17:02:44.715982   13083 fix.go:138] unexpected machine state, will restart: <nil>
I0320 17:02:44.723057   13083 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
I0320 17:02:44.729091   13083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
I0320 17:02:44.731319   13083 main.go:141] libmachine: STDOUT: 
I0320 17:02:44.731339   13083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0320 17:02:44.731371   13083 fix.go:56] duration metric: took 15.520708ms for fixHost
I0320 17:02:44.731380   13083 start.go:83] releasing machines lock for "functional-829000", held for 15.536625ms
W0320 17:02:44.731387   13083 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0320 17:02:44.731441   13083 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0320 17:02:44.731446   13083 start.go:728] Will try again in 5 seconds ...
I0320 17:02:49.732075   13083 start.go:360] acquireMachinesLock for functional-829000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0320 17:02:49.732545   13083 start.go:364] duration metric: took 320.291µs to acquireMachinesLock for "functional-829000"
I0320 17:02:49.732645   13083 start.go:96] Skipping create...Using existing machine configuration
I0320 17:02:49.732658   13083 fix.go:54] fixHost starting: 
I0320 17:02:49.733309   13083 fix.go:112] recreateIfNeeded on functional-829000: state=Stopped err=<nil>
W0320 17:02:49.733327   13083 fix.go:138] unexpected machine state, will restart: <nil>
I0320 17:02:49.742704   13083 out.go:177] * Restarting existing qemu2 VM for "functional-829000" ...
I0320 17:02:49.748906   13083 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 4000 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:96:84:42:b7:46 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/functional-829000/disk.qcow2
I0320 17:02:49.757835   13083 main.go:141] libmachine: STDOUT: 
I0320 17:02:49.757881   13083 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused

I0320 17:02:49.757966   13083 fix.go:56] duration metric: took 25.311583ms for fixHost
I0320 17:02:49.757978   13083 start.go:83] releasing machines lock for "functional-829000", held for 25.415958ms
W0320 17:02:49.758128   13083 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p functional-829000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
I0320 17:02:49.765641   13083 out.go:177] 
W0320 17:02:49.769806   13083 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
W0320 17:02:49.769837   13083 out.go:239] * 
W0320 17:02:49.771175   13083 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0320 17:02:49.781822   13083 out.go:177] 

--- FAIL: TestFunctional/serial/LogsFileCmd (0.08s)

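Both restart attempts above fail identically: the qemu2 driver cannot reach the socket_vmnet socket, so the VM never gets a network and provisioning aborts with GUEST_PROVISION. A minimal check of the daemon on the agent, assuming a Homebrew-managed socket_vmnet install at the default paths:

    # Does the socket minikube is dialing actually exist?
    ls -l /var/run/socket_vmnet || echo "socket missing"

    # socket_vmnet must run as root; with Homebrew it is a launchd service:
    sudo brew services list | grep socket_vmnet
    sudo brew services restart socket_vmnet   # restart it if it is stopped
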
TestFunctional/serial/InvalidService (0.03s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-829000 apply -f testdata/invalidsvc.yaml
functional_test.go:2317: (dbg) Non-zero exit: kubectl --context functional-829000 apply -f testdata/invalidsvc.yaml: exit status 1 (27.785042ms)

** stderr ** 
	error: context "functional-829000" does not exist

** /stderr **
functional_test.go:2319: kubectl --context functional-829000 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctional/serial/InvalidService (0.03s)

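Because the start above never completed, minikube never wrote a "functional-829000" context into the kubeconfig, which is why every kubectl call in the failures below reports that the context does not exist. A quick way to confirm which contexts kubectl can actually see (plain kubectl, shown only as a debugging sketch):

    kubectl config get-contexts                              # table of known contexts
    kubectl config view -o jsonpath='{.contexts[*].name}'    # just the names
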
TestFunctional/parallel/DashboardCmd (0.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-829000 --alsologtostderr -v=1]
functional_test.go:914: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-829000 --alsologtostderr -v=1] ...
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-829000 --alsologtostderr -v=1] stdout:
functional_test.go:906: (dbg) [out/minikube-darwin-arm64 dashboard --url --port 36195 -p functional-829000 --alsologtostderr -v=1] stderr:
I0320 17:03:42.929843   13417 out.go:291] Setting OutFile to fd 1 ...
I0320 17:03:42.930231   13417 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:42.930235   13417 out.go:304] Setting ErrFile to fd 2...
I0320 17:03:42.930238   13417 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:42.930390   13417 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:03:42.930601   13417 mustload.go:65] Loading cluster: functional-829000
I0320 17:03:42.930796   13417 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:42.932540   13417 out.go:177] * The control-plane node functional-829000 host is not running: state=Stopped
I0320 17:03:42.936080   13417 out.go:177]   To start a cluster, run: "minikube start -p functional-829000"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (43.4475ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/DashboardCmd (0.20s)

TestFunctional/parallel/StatusCmd (0.13s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 status
functional_test.go:850: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 status: exit status 7 (32.632209ms)

-- stdout --
	functional-829000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
functional_test.go:852: failed to run minikube status. args "out/minikube-darwin-arm64 -p functional-829000 status" : exit status 7
functional_test.go:856: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 7 (32.92025ms)

-- stdout --
	host:Stopped,kublet:Stopped,apiserver:Stopped,kubeconfig:Stopped

-- /stdout --
functional_test.go:858: failed to run minikube status with custom format: args "out/minikube-darwin-arm64 -p functional-829000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 7
functional_test.go:868: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 status -o json
functional_test.go:868: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 status -o json: exit status 7 (32.226875ms)

-- stdout --
	{"Name":"functional-829000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
functional_test.go:870: failed to run minikube status with json output. args "out/minikube-darwin-arm64 -p functional-829000 status -o json" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (32.635583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/StatusCmd (0.13s)

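The repeated "exit status 7 (may be ok)" is not a crash: minikube's status command encodes component state in the exit code, and in the source each stopped component appears to set one bit (host=1, cluster=2, kubernetes=4), so a fully stopped profile exits with 1|2|4 = 7. The exact flag layout is my reading of the source, so treat it as an assumption. A sketch against the binary used above:

    out/minikube-darwin-arm64 -p functional-829000 status
    echo "status exit code: $?"   # 7 here: host, cluster and kubernetes all down
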
TestFunctional/parallel/ServiceCmdConnect (0.14s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-829000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1623: (dbg) Non-zero exit: kubectl --context functional-829000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.752209ms)

** stderr ** 
	error: context "functional-829000" does not exist

** /stderr **
functional_test.go:1629: failed to create hello-node deployment with this command "kubectl --context functional-829000 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
functional_test.go:1594: service test failed - dumping debug information
functional_test.go:1595: -----------------------service failure post-mortem--------------------------------
functional_test.go:1598: (dbg) Run:  kubectl --context functional-829000 describe po hello-node-connect
functional_test.go:1598: (dbg) Non-zero exit: kubectl --context functional-829000 describe po hello-node-connect: exit status 1 (26.44325ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

** /stderr **
functional_test.go:1600: "kubectl --context functional-829000 describe po hello-node-connect" failed: exit status 1
functional_test.go:1602: hello-node pod describe:
functional_test.go:1604: (dbg) Run:  kubectl --context functional-829000 logs -l app=hello-node-connect
functional_test.go:1604: (dbg) Non-zero exit: kubectl --context functional-829000 logs -l app=hello-node-connect: exit status 1 (26.580166ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

** /stderr **
functional_test.go:1606: "kubectl --context functional-829000 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1608: hello-node logs:
functional_test.go:1610: (dbg) Run:  kubectl --context functional-829000 describe svc hello-node-connect
functional_test.go:1610: (dbg) Non-zero exit: kubectl --context functional-829000 describe svc hello-node-connect: exit status 1 (26.517792ms)

** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

** /stderr **
functional_test.go:1612: "kubectl --context functional-829000 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1614: hello-node svc describe:
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (32.600958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (0.03s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:45: failed waiting for storage-provisioner: client config: context "functional-829000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (33.459167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (0.03s)

TestFunctional/parallel/SSHCmd (0.12s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "echo hello"
functional_test.go:1721: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "echo hello": exit status 83 (42.3975ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1726: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"echo hello\"" : exit status 83
functional_test.go:1730: expected minikube ssh command output to be -"hello"- but got *"* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n"*. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"echo hello\""
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "cat /etc/hostname": exit status 83 (48.062917ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1744: failed to run an ssh command. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"cat /etc/hostname\"" : exit status 83
functional_test.go:1748: expected minikube ssh command output to be -"functional-829000"- but got *"* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n"*. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"cat /etc/hostname\""
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (32.285542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/SSHCmd (0.12s)

TestFunctional/parallel/CpCmd (0.28s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 cp testdata/cp-test.txt /home/docker/cp-test.txt: exit status 83 (51.634333ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-829000 cp testdata/cp-test.txt /home/docker/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 "sudo cat /home/docker/cp-test.txt": exit status 83 (45.005708ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-829000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-829000\"\n",
  }, "")
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cp functional-829000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3942821273/001/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 cp functional-829000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3942821273/001/cp-test.txt: exit status 83 (49.578084ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-829000 cp functional-829000:/home/docker/cp-test.txt /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3942821273/001/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 "sudo cat /home/docker/cp-test.txt": exit status 83 (42.779166ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 \"sudo cat /home/docker/cp-test.txt\"" : exit status 83
helpers_test.go:528: failed to read test file 'testdata/cp-test.txt' : open /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelCpCmd3942821273/001/cp-test.txt: no such file or directory
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n",
+ 	"",
  )
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt: exit status 83 (45.00625ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
helpers_test.go:561: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-829000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt" : exit status 83
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 "sudo cat /tmp/does/not/exist/cp-test.txt": exit status 83 (46.830416ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
helpers_test.go:539: failed to run an cp command. args "out/minikube-darwin-arm64 -p functional-829000 ssh -n functional-829000 \"sudo cat /tmp/does/not/exist/cp-test.txt\"" : exit status 83
helpers_test.go:573: /testdata/cp-test.txt content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file cp process",
+ 	"he control-plane node functional-829000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-829000\"\n",
  }, "")
--- FAIL: TestFunctional/parallel/CpCmd (0.28s)

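The "(-want +got)" blocks in CpCmd, and in the FileSync and CertSync failures that follow, are diffs rendered by the go-cmp library, which the test helpers use to compare file contents; the strings.Join({...}) framing is cmp's rendering of a string diff. A self-contained sketch of the same mechanism, assuming github.com/google/go-cmp is on the module path:

    package main

    import (
    	"fmt"

    	"github.com/google/go-cmp/cmp"
    )

    func main() {
    	want := "Test file for checking file cp process"
    	got := "* The control-plane node functional-829000 host is not running: state=Stopped\n"

    	// cmp.Diff returns "" for equal values, otherwise a -want +got diff
    	// shaped like the ones in the log above.
    	if diff := cmp.Diff(want, got); diff != "" {
    		fmt.Printf("content mismatch (-want +got):\n%s", diff)
    	}
    }
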
TestFunctional/parallel/FileSync (0.08s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/12665/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/test/nested/copy/12665/hosts"
functional_test.go:1927: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/test/nested/copy/12665/hosts": exit status 83 (44.478041ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1929: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/test/nested/copy/12665/hosts" failed: exit status 83
functional_test.go:1932: file sync test content: * The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:1942: /etc/sync.test content mismatch (-want +got):
  strings.Join({
+ 	"* ",
  	"T",
- 	"est file for checking file sync process",
+ 	"he control-plane node functional-829000 host is not running: sta",
+ 	"te=Stopped\n  To start a cluster, run: \"minikube start -p functio",
+ 	"nal-829000\"\n",
  }, "")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (32.360916ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/FileSync (0.08s)

TestFunctional/parallel/CertSync (0.3s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/12665.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/12665.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/12665.pem": exit status 83 (47.315667ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/12665.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo cat /etc/ssl/certs/12665.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/12665.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-829000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-829000"
  	"""
  )
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/12665.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /usr/share/ca-certificates/12665.pem"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /usr/share/ca-certificates/12665.pem": exit status 83 (41.694834ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/usr/share/ca-certificates/12665.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo cat /usr/share/ca-certificates/12665.pem\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /usr/share/ca-certificates/12665.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-829000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-829000"
  	"""
  )
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/51391683.0": exit status 83 (43.674292ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1971: failed to check existence of "/etc/ssl/certs/51391683.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo cat /etc/ssl/certs/51391683.0\"": exit status 83
functional_test.go:1977: failed verify pem file. minikube_test.pem -> /etc/ssl/certs/51391683.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIDsDCCApgCCQD5n0OIsOYIjDANBgkqhkiG9w0BAQsFADCBmTELMAkGA1UEBhMC
- 	VVMxEzARBgNVBAgMCkNhbGlmb3JuaWExFjAUBgNVBAcMDVNhbiBGcmFuY2lzY28x
- 	ETAPBgNVBAoMCG1pbmlrdWJlMRYwFAYDVQQLDA1QYXJ0eSBQYXJyb3RzMREwDwYD
- 	VQQDDAhtaW5pa3ViZTEfMB0GCSqGSIb3DQEJARYQbWluaWt1YmVAY25jZi5pbzAe
- 	Fw0yMDAzMDQyMTU2MjZaFw0yMTAzMDQyMTU2MjZaMIGZMQswCQYDVQQGEwJVUzET
- 	MBEGA1UECAwKQ2FsaWZvcm5pYTEWMBQGA1UEBwwNU2FuIEZyYW5jaXNjbzERMA8G
- 	A1UECgwIbWluaWt1YmUxFjAUBgNVBAsMDVBhcnR5IFBhcnJvdHMxETAPBgNVBAMM
- 	CG1pbmlrdWJlMR8wHQYJKoZIhvcNAQkBFhBtaW5pa3ViZUBjbmNmLmlvMIIBIjAN
- 	BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA/qVMQ/234ul5yWI1yaHvV4pZ5Ffy
- 	M0bSMjzZUwlsvzerXzF3WrdpeZs5GzBNBWL/Db9KziGHCtfX9j5okJqPvB2lxdL5
- 	d5hFIYSORSemYLX2kdlnlykY5fzmFLKIUO9xXs0YNF4JUMEBgGK6n/BdLvXDUULZ
- 	26QOKs6+iH7TAL4RtozxQ8YXKQArdmpeAvxy2PSZGvVk1htKtyuKQsiFqH3oRleK
- 	3mljXfC5LsoIJHqd/8lAsckH87+IfwYnJ1CNJM2gueaCf+HmudVrvXfHaszh1Wh1
- 	9HKPE95Azi6CKoBGlRGFxt8UR72YIcTjC/lYxzbHeCpU7RCiXfsC0iMTlQIDAQAB
- 	MA0GCSqGSIb3DQEBCwUAA4IBAQBhsKnghyBki4NOnK5gHm7ow+7S+xvkjJhXBQ6i
- 	/xQD4/GCZ1tH5iFHXmo+bt4jB9hvKLyN5M5a8TlDwwqTLIoPDQJh37UpSCwbY/6z
- 	nE2aP3N2ue1/DeY60tgAh1c1uJDMeTiFbalJqSkneaHpNfvEQhUORFoN4yQSVEYg
- 	+T9mzTAWQ55TeBgbRevmA25nXHdPAae1MvJWDeG+aJfhq1I2hCwaitJ3iSwgn2ew
- 	637It/aBkMLhsCKTHxlXDGUX401ddbc0ZiC308cyMbis3iBeh4RBjkFxP8eIWFmK
- 	sos/dyqdua742L1cOKYFbLJfjA1VyxJQUxQvWKkbaq0xi7ao
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-829000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-829000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/126652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/126652.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/126652.pem": exit status 83 (39.608834ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/126652.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo cat /etc/ssl/certs/126652.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/126652.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-829000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-829000"
  	"""
  )
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/126652.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /usr/share/ca-certificates/126652.pem"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /usr/share/ca-certificates/126652.pem": exit status 83 (41.796667ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/usr/share/ca-certificates/126652.pem" inside minikube. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo cat /usr/share/ca-certificates/126652.pem\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /usr/share/ca-certificates/126652.pem mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-829000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-829000"
  	"""
  )
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": exit status 83 (52.644125ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test.go:1998: failed to check existence of "/etc/ssl/certs/3ec20f2e.0" inside minikube. args "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo cat /etc/ssl/certs/3ec20f2e.0\"": exit status 83
functional_test.go:2004: failed verify pem file. minikube_test2.pem -> /etc/ssl/certs/3ec20f2e.0 mismatch (-want +got):
  (
  	"""
- 	-----BEGIN CERTIFICATE-----
- 	MIIEwDCCAqgCCQCUeXrVemI4eTANBgkqhkiG9w0BAQsFADAiMQswCQYDVQQGEwJV
- 	UzETMBEGA1UECAwKQ2FsaWZvcm5pYTAeFw0yMTA3MjEyMDM4MDdaFw0yMTA4MjAy
- 	MDM4MDdaMCIxCzAJBgNVBAYTAlVTMRMwEQYDVQQIDApDYWxpZm9ybmlhMIICIjAN
- 	BgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAu1+sCiosrYIy83a+KtPdoGsKun+O
- 	jKhETWQrum5FGxqKyQzhHN8q6iZBI94m1sZb2xIJDcKaAsHpKf1z/5pkrWQW4JeL
- 	riqJ5U0kENy6mWGoFzPLQHkKMXSYUEkRjrgJphh5zLkWDzx6uwxHIrs5yaIwn71Z
- 	enYezHL8NyMd10+up1lNvH+xGqnSJ3ge7Rj+r5XbA3avvaJ9AVdiJMU5ArMw4Snu
- 	dLDSUueWFedmXoQud083EvDbMY61blNKUR6BKBJLPopH+9NnUP7FRU25lvTsA8qT
- 	zz/KertMFhCfwvK4y7a7+GbIPD3fItpm23GORuViEGgeRMH7GGSNVfm4VtK5mI4c
- 	XK9BrCE+FXowHOWU3MTH1qsvTigd3JZc8BKTaePJGMIDT1HIAaoK7SOoZOLka7bi
- 	IiLaojrIWY6tE5IIl46cQWLg09G+kjKVfEEvEHNe0t22I9etGeUUEwlcMITHmEdE
- 	WrXytC8dSNFObSG5B2MX2Ovm+yNblcK7TI7lW/tkbxtKs56J1mNmi4LXXBM8FdF8
- 	w9MpJc+ySVj2+f+eBE08amKjC9VVbBzNSw7MRaI9fFY5AAifJ8g55F3/KCNq5aTd
- 	rBADtAa5kQkTVjfMBhG++0Ow4f55hm73oJAy/qxb09OY7Vk9ky/K3l8GfWv8ozIF
- 	w+Oq6vdsspvtVJ8CAwEAATANBgkqhkiG9w0BAQsFAAOCAgEAGKVxsf13kYGaQJ+J
- 	6eyRZXlV5Bp+9EGtMPGsuVv2HJa4oxMBn7Xc/bUhjY9Is/ZwfOpPUPO/nQtSSPmO
- 	aozQj/27p8HDTW201fwLNiZMcppBdJvIQdDzCh6e2ikg3lqsw2BoLX1vbgc9HPml
- 	P8QCHEz2lricGdTuMRtBgH5x/ZkZGLbADQBeyoPTsPaQceRt5hPYXWifqiHhcJoL
- 	2T+XgbaHJ4lEhCU0IXJG0vlLuAyxQzO3gMeHK8BlLt/h/JCiDndo63a4XCkenmY8
- 	8/6Y9Lgh+O3954YgwdXBIS33CzhY7c+tfpag1hwpDHro/zsyLwdN2JxZqWymfg8T
- 	RyIeJ5VpY+CGm1fVTx84twQbiM241eZDYaW6Ap5/ZuxPbzY++KDMZHMuJYURorAU
- 	JE/SE6WltXpKTvtzyPNKt8fgPQmFzlBosDZCf/5EiyVqbLzideF1l+Rd//5gRBg0
- 	B63fOlCdxGtDeX1gKzekV4sXQIsWKW1BSCEInJt5lOS8Ex4JTXy8crwKo7hv0zPc
- 	sOjAXbtDYlPf/jPFGKzkgFACB87Bx4ZUZMp/ShOzjMt20MLGLJvCGCQIHetUz+GG
- 	/LTwSNhgWCheYkbDwXhCmqbN249xE3fNHC6zQ/IMJ30v/UWN4RldGfFzmoVBRUUX
- 	eQ7g5kXER7H/Lh/2V9FyaRhPpZM=
- 	-----END CERTIFICATE-----
+ 	* The control-plane node functional-829000 host is not running: state=Stopped
+ 	  To start a cluster, run: "minikube start -p functional-829000"
  	"""
  )
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (33.138625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/CertSync (0.30s)
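For reference, the failing assertion above (functional_test.go:2004) amounts to reading the local test PEM and comparing it byte-for-byte with the copy fetched from the VM over ssh. A minimal stdlib sketch using the binary and paths from the log; the control flow is illustrative, not the test's actual code:

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		want, err := os.ReadFile("testdata/minikube_test2.pem") // assumed local path
		if err != nil {
			panic(err)
		}
		// Exit status 83 means minikube printed the "host is not running"
		// hint instead of the certificate, which is exactly the mismatch
		// shown in the diff above.
		got, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"ssh", "sudo cat /etc/ssl/certs/3ec20f2e.0").Output()
		if err != nil {
			fmt.Printf("ssh failed: %v\n", err)
		}
		if !bytes.Equal(want, got) {
			fmt.Println("pem mismatch between host and VM copies")
		}
	}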

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-829000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:218: (dbg) Non-zero exit: kubectl --context functional-829000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (25.860958ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
functional_test.go:220: failed to 'kubectl get nodes' with args "kubectl --context functional-829000 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:226: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
functional_test.go:226: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p functional-829000 -n functional-829000: exit status 7 (34.772583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "functional-829000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestFunctional/parallel/NodeLabels (0.06s)
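The label check above can be reproduced outside the harness: list the first node's label keys with a go-template and look for the minikube.k8s.io/* keys logged as missing. A sketch under the assumption that the kubeconfig context exists (here it does not, hence every expectation fails):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`
		out, err := exec.Command("kubectl", "--context", "functional-829000",
			"get", "nodes", "--output=go-template", "--template="+tmpl).Output()
		if err != nil {
			fmt.Printf("kubectl failed: %v\n", err) // e.g. context not found
			return
		}
		for _, label := range []string{
			"minikube.k8s.io/commit", "minikube.k8s.io/version",
			"minikube.k8s.io/updated_at", "minikube.k8s.io/name",
			"minikube.k8s.io/primary",
		} {
			if !strings.Contains(string(out), label) {
				fmt.Printf("missing node label %q\n", label)
			}
		}
	}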

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo systemctl is-active crio": exit status 83 (41.337083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:2026: output of 
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --: exit status 83
functional_test.go:2029: For runtime "docker": expected "crio" to be inactive but got "* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n" 
--- FAIL: TestFunctional/parallel/NonActiveRuntimeDisabled (0.04s)
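The assertion here (functional_test.go:2029) expects `systemctl is-active crio` inside the VM to report a non-active state when docker is the configured runtime. A rough stand-alone equivalent, illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// is-active exits non-zero for anything but "active", so the error
		// is ignored and only the printed state is inspected.
		out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"ssh", "sudo systemctl is-active crio").CombinedOutput()
		got := strings.TrimSpace(string(out))
		if got == "active" {
			fmt.Println(`expected "crio" to be inactive with the docker runtime`)
		} else {
			fmt.Printf("crio state: %q\n", got) // here: the stopped-host hint
		}
	}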

                                                
                                    
TestFunctional/parallel/Version/components (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 version -o=json --components
functional_test.go:2266: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 version -o=json --components: exit status 83 (43.912666ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:2268: error version: exit status 83
functional_test.go:2273: expected to see "buildctl" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "commit" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "containerd" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "crictl" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "crio" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "ctr" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "docker" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "minikubeVersion" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "podman" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:2273: expected to see "crun" in the minikube version --components but got:
* The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
--- FAIL: TestFunctional/parallel/Version/components (0.04s)
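The components check is a set of substring assertions over the output of `version -o=json --components`; the expected names are exactly the ones listed in the failures above. A stdlib-only sketch, illustrative rather than the test's code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"version", "-o=json", "--components").CombinedOutput()
		if err != nil {
			fmt.Printf("version failed: %v\n", err) // exit status 83 here
		}
		for _, want := range []string{
			"buildctl", "commit", "containerd", "crictl", "crio",
			"ctr", "docker", "minikubeVersion", "podman", "crun",
		} {
			if !strings.Contains(string(out), want) {
				fmt.Printf("expected %q in components output\n", want)
			}
		}
	}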

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-829000 image ls --format short --alsologtostderr:

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-829000 image ls --format short --alsologtostderr:
I0320 17:03:43.342626   13432 out.go:291] Setting OutFile to fd 1 ...
I0320 17:03:43.342808   13432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.342811   13432 out.go:304] Setting ErrFile to fd 2...
I0320 17:03:43.342813   13432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.342950   13432 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:03:43.343406   13432 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:43.343471   13432 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-829000 image ls --format table --alsologtostderr:
|-------|-----|----------|------|
| Image | Tag | Image ID | Size |
|-------|-----|----------|------|
|-------|-----|----------|------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-829000 image ls --format table --alsologtostderr:
I0320 17:03:43.577920   13444 out.go:291] Setting OutFile to fd 1 ...
I0320 17:03:43.578094   13444 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.578097   13444 out.go:304] Setting ErrFile to fd 2...
I0320 17:03:43.578099   13444 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.578216   13444 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:03:43.578622   13444 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:43.578686   13444 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected | registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListTable (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-829000 image ls --format json --alsologtostderr:
[]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-829000 image ls --format json --alsologtostderr:
I0320 17:03:43.540846   13442 out.go:291] Setting OutFile to fd 1 ...
I0320 17:03:43.541006   13442 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.541009   13442 out.go:304] Setting ErrFile to fd 2...
I0320 17:03:43.541012   13442 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.541153   13442 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:03:43.541580   13442 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:43.541646   13442 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected ["registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListJson (0.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-arm64 -p functional-829000 image ls --format yaml --alsologtostderr:
[]

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-829000 image ls --format yaml --alsologtostderr:
I0320 17:03:43.380981   13434 out.go:291] Setting OutFile to fd 1 ...
I0320 17:03:43.381129   13434 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.381133   13434 out.go:304] Setting ErrFile to fd 2...
I0320 17:03:43.381135   13434 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.381256   13434 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:03:43.381675   13434 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:43.381743   13434 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
functional_test.go:274: expected - registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListYaml (0.04s)
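The four ImageList* failures above share one shape: each `image ls --format <fmt>` variant must mention registry.k8s.io/pause in its own syntax, and every variant returned an empty image list. A combined sketch of those checks, with the markers taken from the logged expectations and the loop structure illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for format, marker := range map[string]string{
			"short": "registry.k8s.io/pause",
			"table": "| registry.k8s.io/pause",
			"json":  `["registry.k8s.io/pause`,
			"yaml":  "- registry.k8s.io/pause",
		} {
			out, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
				"image", "ls", "--format", format).Output()
			if !strings.Contains(string(out), marker) {
				fmt.Printf("format %s: pause image not listed\n", format)
			}
		}
	}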

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh pgrep buildkitd: exit status 83 (44.7105ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image build -t localhost/my-image:functional-829000 testdata/build --alsologtostderr
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-arm64 -p functional-829000 image build -t localhost/my-image:functional-829000 testdata/build --alsologtostderr:
I0320 17:03:43.463726   13438 out.go:291] Setting OutFile to fd 1 ...
I0320 17:03:43.464538   13438 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.464542   13438 out.go:304] Setting ErrFile to fd 2...
I0320 17:03:43.464544   13438 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:03:43.464699   13438 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:03:43.465130   13438 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:43.465566   13438 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:03:43.465794   13438 build_images.go:133] succeeded building to: 
I0320 17:03:43.465798   13438 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls
functional_test.go:442: expected "localhost/my-image:functional-829000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageBuild (0.12s)
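The ImageBuild flow above is build-then-verify: build a throwaway tag from testdata/build, then require `image ls` to report it. A sketch with the tag and directory from the log, everything else illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tag := "localhost/my-image:functional-829000"
		if out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"image", "build", "-t", tag, "testdata/build").CombinedOutput(); err != nil {
			fmt.Printf("build failed: %v\n%s", err, out)
		}
		ls, _ := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"image", "ls").Output()
		if !strings.Contains(string(ls), tag) {
			fmt.Printf("expected %q in image ls output\n", tag)
		}
	}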

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-829000 docker-env) && out/minikube-darwin-arm64 status -p functional-829000"
functional_test.go:495: (dbg) Non-zero exit: /bin/bash -c "eval $(out/minikube-darwin-arm64 -p functional-829000 docker-env) && out/minikube-darwin-arm64 status -p functional-829000": exit status 1 (46.003833ms)
functional_test.go:501: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctional/parallel/DockerEnv/bash (0.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2: exit status 83 (45.686875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:03:43.212098   13426 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:03:43.212501   13426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:43.212505   13426 out.go:304] Setting ErrFile to fd 2...
	I0320 17:03:43.212507   13426 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:43.212670   13426 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:03:43.212945   13426 mustload.go:65] Loading cluster: functional-829000
	I0320 17:03:43.213125   13426 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:03:43.218104   13426 out.go:177] * The control-plane node functional-829000 host is not running: state=Stopped
	I0320 17:03:43.222019   13426 out.go:177]   To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n", want=*"No changes"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_changes (0.05s)
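All three UpdateContextCmd subtests run the same command and differ only in the expected stdout pattern ("No changes" here, "context has been updated" in the two that follow). A sketch of the no_changes variant, illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"update-context").CombinedOutput()
		if err != nil {
			fmt.Printf("update-context failed: %v\n", err) // exit status 83 here
		}
		if !strings.Contains(string(out), "No changes") {
			fmt.Printf("got %q, want output containing \"No changes\"\n", out)
		}
	}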

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2: exit status 83 (42.619875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:03:43.299600   13430 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:03:43.299740   13430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:43.299744   13430 out.go:304] Setting ErrFile to fd 2...
	I0320 17:03:43.299746   13430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:43.299873   13430 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:03:43.300101   13430 mustload.go:65] Loading cluster: functional-829000
	I0320 17:03:43.300298   13430 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:03:43.304140   13430 out.go:177] * The control-plane node functional-829000 host is not running: state=Stopped
	I0320 17:03:43.308110   13430 out.go:177]   To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.04s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2: exit status 83 (41.681875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:03:43.257681   13428 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:03:43.257823   13428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:43.257827   13428 out.go:304] Setting ErrFile to fd 2...
	I0320 17:03:43.257829   13428 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:43.257955   13428 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:03:43.258223   13428 mustload.go:65] Loading cluster: functional-829000
	I0320 17:03:43.258445   13428 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:03:43.260448   13428 out.go:177] * The control-plane node functional-829000 host is not running: state=Stopped
	I0320 17:03:43.264064   13428 out.go:177]   To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
** /stderr **
functional_test.go:2117: failed to run minikube update-context: args "out/minikube-darwin-arm64 -p functional-829000 update-context --alsologtostderr -v=2": exit status 83
functional_test.go:2122: update-context: got="* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n", want=*"context has been updated"*
--- FAIL: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-829000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1433: (dbg) Non-zero exit: kubectl --context functional-829000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8: exit status 1 (26.380125ms)

                                                
                                                
** stderr ** 
	error: context "functional-829000" does not exist

                                                
                                                
** /stderr **
functional_test.go:1439: failed to create hello-node deployment with this command "kubectl --context functional-829000 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8": exit status 1.
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (0.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 service list
functional_test.go:1455: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 service list: exit status 83 (50.584875ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:1457: failed to do service list. args "out/minikube-darwin-arm64 -p functional-829000 service list" : exit status 83
functional_test.go:1460: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"\n"-
--- FAIL: TestFunctional/parallel/ServiceCmd/List (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 service list -o json
functional_test.go:1485: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 service list -o json: exit status 83 (43.801083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:1487: failed to list services with json format. args "out/minikube-darwin-arm64 -p functional-829000 service list -o json": exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/JSONOutput (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 service --namespace=default --https --url hello-node: exit status 83 (44.70575ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:1507: failed to get service url. args "out/minikube-darwin-arm64 -p functional-829000 service --namespace=default --https --url hello-node" : exit status 83
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.04s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 service hello-node --url --format={{.IP}}: exit status 83 (44.932083ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-darwin-arm64 -p functional-829000 service hello-node --url --format={{.IP}}": exit status 83
functional_test.go:1544: "* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.05s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 service hello-node --url: exit status 83 (43.698708ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
-- /stdout --
functional_test.go:1557: failed to get service url. args: "out/minikube-darwin-arm64 -p functional-829000 service hello-node --url": exit status 83
functional_test.go:1561: found endpoint for hello-node: * The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test.go:1565: failed to parse "* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"": parse "* The control-plane node functional-829000 host is not running: state=Stopped\n  To start a cluster, run: \"minikube start -p functional-829000\"": net/url: invalid control character in URL
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.04s)
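The parse error logged at functional_test.go:1565 is mechanical: `service --url` emitted the multi-line "host is not running" hint, and net/url rejects the embedded newline as a control character. A minimal reproduction:

	package main

	import (
		"fmt"
		"net/url"
	)

	func main() {
		got := "* The control-plane node functional-829000 host is not running: state=Stopped\n" +
			"  To start a cluster, run: \"minikube start -p functional-829000\""
		if _, err := url.Parse(got); err != nil {
			fmt.Println(err) // net/url: invalid control character in URL
		}
	}

Any multi-line, non-URL output from the command fails the same way, so the parse error is a symptom of the stopped host rather than a URL-formatting bug.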

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 83. stderr: I0320 17:02:52.822488   13201 out.go:291] Setting OutFile to fd 1 ...
I0320 17:02:52.822649   13201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:02:52.822652   13201 out.go:304] Setting ErrFile to fd 2...
I0320 17:02:52.822655   13201 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:02:52.822783   13201 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:02:52.823057   13201 mustload.go:65] Loading cluster: functional-829000
I0320 17:02:52.823252   13201 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:02:52.827061   13201 out.go:177] * The control-plane node functional-829000 host is not running: state=Stopped
I0320 17:02:52.838041   13201 out.go:177]   To start a cluster, run: "minikube start -p functional-829000"

                                                
                                                
stdout: * The control-plane node functional-829000 host is not running: state=Stopped
To start a cluster, run: "minikube start -p functional-829000"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 13202: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:208: failed to get Kubernetes client for "functional-829000": client config: context "functional-829000" does not exist
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-829000 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-829000 get svc nginx-svc: exit status 1 (69.258166ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: functional-829000

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-829000 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (113.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-arm64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (1.316833458s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-829000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-darwin-arm64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (1.313479917s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-829000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (5.450391833s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-829000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-arm64 -p functional-829000 image load --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr: (1.166747334s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-829000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image save gcr.io/google-containers/addon-resizer:functional-829000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/Users/jenkins/workspace/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.04s)
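The save-to-file assertion reduces to an existence check on the destination tarball after `image save`. A sketch with the path from the log, otherwise illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		dst := "/Users/jenkins/workspace/addon-resizer-save.tar"
		_ = exec.Command("out/minikube-darwin-arm64", "-p", "functional-829000",
			"image", "save",
			"gcr.io/google-containers/addon-resizer:functional-829000", dst).Run()
		if _, err := os.Stat(dst); os.IsNotExist(err) {
			fmt.Printf("expected %s to exist after image save\n", dst)
		}
	}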

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-829000" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:319: (dbg) Non-zero exit: dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A: exit status 9 (15.02713475s)

                                                
                                                
-- stdout --
	
	; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
	; (1 server found)
	;; global options: +cmd
	;; connection timed out; no servers could be reached

                                                
                                                
-- /stdout --
functional_test_tunnel_test.go:322: failed to resolve DNS name: exit status 9
functional_test_tunnel_test.go:329: expected body to contain "ANSWER: 1", but got *"\n; <<>> DiG 9.10.6 <<>> +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A\n; (1 server found)\n;; global options: +cmd\n;; connection timed out; no servers could be reached\n"*
functional_test_tunnel_test.go:332: (dbg) Run:  scutil --dns
functional_test_tunnel_test.go:336: debug for DNS configuration:
DNS configuration

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
flags    : Request A records
reach    : 0x00000002 (Reachable)

                                                
                                                
resolver #2
domain   : local
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300000

                                                
                                                
resolver #3
domain   : 254.169.in-addr.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300200

                                                
                                                
resolver #4
domain   : 8.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300400

                                                
                                                
resolver #5
domain   : 9.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300600

                                                
                                                
resolver #6
domain   : a.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 300800

                                                
                                                
resolver #7
domain   : b.e.f.ip6.arpa
options  : mdns
timeout  : 5
flags    : Request A records
reach    : 0x00000000 (Not Reachable)
order    : 301000

                                                
                                                
resolver #8
domain   : cluster.local
nameserver[0] : 10.96.0.10
flags    : Request A records
reach    : 0x00000002 (Reachable)
order    : 1

                                                
                                                
DNS configuration (for scoped queries)

                                                
                                                
resolver #1
nameserver[0] : 207.254.72.253
nameserver[1] : 207.254.72.254
nameserver[2] : 8.8.8.8
if_index : 15 (en0)
flags    : Scoped, Request A records
reach    : 0x00000002 (Reachable)
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (15.06s)
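The dig check queries the in-cluster DNS server at 10.96.0.10 (reachable from the host only while `minikube tunnel` is up) and expects a single A record, i.e. an "ANSWER: 1" line in dig's output. A sketch, illustrative only:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("dig", "+time=5", "+tries=3", "@10.96.0.10",
			"nginx-svc.default.svc.cluster.local.", "A").CombinedOutput()
		if err != nil {
			fmt.Printf("dig failed: %v\n", err) // exit status 9: no servers reached
		}
		if !strings.Contains(string(out), "ANSWER: 1") {
			fmt.Println("expected exactly one A record for nginx-svc")
		}
	}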

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (21.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:419: failed to hit nginx with DNS forwarded "http://nginx-svc.default.svc.cluster.local.": Temporary Error: Get "http://nginx-svc.default.svc.cluster.local.": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:426: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (21.89s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (9.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-936000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-936000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (9.807237375s)

                                                
                                                
-- stdout --
	* [ha-936000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "ha-936000" primary control-plane node in "ha-936000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "ha-936000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:05:33.711631   13498 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:05:33.711765   13498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:05:33.711768   13498 out.go:304] Setting ErrFile to fd 2...
	I0320 17:05:33.711771   13498 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:05:33.711893   13498 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:05:33.712990   13498 out.go:298] Setting JSON to false
	I0320 17:05:33.729111   13498 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7503,"bootTime":1710972030,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:05:33.729171   13498 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:05:33.735882   13498 out.go:177] * [ha-936000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:05:33.744668   13498 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:05:33.749836   13498 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:05:33.744708   13498 notify.go:220] Checking for updates...
	I0320 17:05:33.755766   13498 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:05:33.758780   13498 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:05:33.761783   13498 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:05:33.763251   13498 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:05:33.766998   13498 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:05:33.770796   13498 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:05:33.776738   13498 start.go:297] selected driver: qemu2
	I0320 17:05:33.776744   13498 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:05:33.776749   13498 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:05:33.779049   13498 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:05:33.782795   13498 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:05:33.785952   13498 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:05:33.785999   13498 cni.go:84] Creating CNI manager for ""
	I0320 17:05:33.786004   13498 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0320 17:05:33.786009   13498 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0320 17:05:33.786052   13498 start.go:340] cluster config:
	{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:05:33.790625   13498 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:05:33.798834   13498 out.go:177] * Starting "ha-936000" primary control-plane node in "ha-936000" cluster
	I0320 17:05:33.802794   13498 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:05:33.802811   13498 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:05:33.802824   13498 cache.go:56] Caching tarball of preloaded images
	I0320 17:05:33.802897   13498 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:05:33.802911   13498 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:05:33.803152   13498 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/ha-936000/config.json ...
	I0320 17:05:33.803164   13498 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/ha-936000/config.json: {Name:mka396120c070f416b3d76b1861316f6d7c5db60 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:05:33.803384   13498 start.go:360] acquireMachinesLock for ha-936000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:05:33.803417   13498 start.go:364] duration metric: took 27.166µs to acquireMachinesLock for "ha-936000"
	I0320 17:05:33.803431   13498 start.go:93] Provisioning new machine with config: &{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:05:33.803462   13498 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:05:33.810711   13498 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:05:33.828585   13498 start.go:159] libmachine.API.Create for "ha-936000" (driver="qemu2")
	I0320 17:05:33.828615   13498 client.go:168] LocalClient.Create starting
	I0320 17:05:33.828681   13498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:05:33.828711   13498 main.go:141] libmachine: Decoding PEM data...
	I0320 17:05:33.828724   13498 main.go:141] libmachine: Parsing certificate...
	I0320 17:05:33.828776   13498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:05:33.828800   13498 main.go:141] libmachine: Decoding PEM data...
	I0320 17:05:33.828809   13498 main.go:141] libmachine: Parsing certificate...
	I0320 17:05:33.829209   13498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:05:33.974593   13498 main.go:141] libmachine: Creating SSH key...
	I0320 17:05:34.026569   13498 main.go:141] libmachine: Creating Disk image...
	I0320 17:05:34.026573   13498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:05:34.026756   13498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:05:34.039054   13498 main.go:141] libmachine: STDOUT: 
	I0320 17:05:34.039074   13498 main.go:141] libmachine: STDERR: 
	I0320 17:05:34.039132   13498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2 +20000M
	I0320 17:05:34.050118   13498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:05:34.050133   13498 main.go:141] libmachine: STDERR: 
	I0320 17:05:34.050152   13498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:05:34.050157   13498 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:05:34.050184   13498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:4f:f3:e9:f6:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:05:34.051916   13498 main.go:141] libmachine: STDOUT: 
	I0320 17:05:34.051931   13498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:05:34.051955   13498 client.go:171] duration metric: took 223.336208ms to LocalClient.Create
	I0320 17:05:36.054185   13498 start.go:128] duration metric: took 2.250708791s to createHost
	I0320 17:05:36.054352   13498 start.go:83] releasing machines lock for "ha-936000", held for 2.250897958s
	W0320 17:05:36.054446   13498 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:05:36.065663   13498 out.go:177] * Deleting "ha-936000" in qemu2 ...
	W0320 17:05:36.094545   13498 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:05:36.094581   13498 start.go:728] Will try again in 5 seconds ...
	I0320 17:05:41.095839   13498 start.go:360] acquireMachinesLock for ha-936000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:05:41.096227   13498 start.go:364] duration metric: took 310.209µs to acquireMachinesLock for "ha-936000"
	I0320 17:05:41.096355   13498 start.go:93] Provisioning new machine with config: &{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:05:41.096622   13498 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:05:41.108345   13498 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:05:41.158921   13498 start.go:159] libmachine.API.Create for "ha-936000" (driver="qemu2")
	I0320 17:05:41.158964   13498 client.go:168] LocalClient.Create starting
	I0320 17:05:41.159066   13498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:05:41.159124   13498 main.go:141] libmachine: Decoding PEM data...
	I0320 17:05:41.159150   13498 main.go:141] libmachine: Parsing certificate...
	I0320 17:05:41.159210   13498 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:05:41.159254   13498 main.go:141] libmachine: Decoding PEM data...
	I0320 17:05:41.159268   13498 main.go:141] libmachine: Parsing certificate...
	I0320 17:05:41.159773   13498 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:05:41.315096   13498 main.go:141] libmachine: Creating SSH key...
	I0320 17:05:41.412072   13498 main.go:141] libmachine: Creating Disk image...
	I0320 17:05:41.412078   13498 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:05:41.412264   13498 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:05:41.424769   13498 main.go:141] libmachine: STDOUT: 
	I0320 17:05:41.424792   13498 main.go:141] libmachine: STDERR: 
	I0320 17:05:41.424854   13498 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2 +20000M
	I0320 17:05:41.435440   13498 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:05:41.435476   13498 main.go:141] libmachine: STDERR: 
	I0320 17:05:41.435492   13498 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:05:41.435497   13498 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:05:41.435523   13498 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:63:f0:ee:63:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:05:41.437235   13498 main.go:141] libmachine: STDOUT: 
	I0320 17:05:41.437253   13498 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:05:41.437265   13498 client.go:171] duration metric: took 278.298375ms to LocalClient.Create
	I0320 17:05:43.439489   13498 start.go:128] duration metric: took 2.342879792s to createHost
	I0320 17:05:43.439540   13498 start.go:83] releasing machines lock for "ha-936000", held for 2.343329958s
	W0320 17:05:43.439831   13498 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-936000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:05:43.454493   13498 out.go:177] 
	W0320 17:05:43.459549   13498 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:05:43.459579   13498 out.go:239] * 
	* 
	W0320 17:05:43.462095   13498 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:05:43.473417   13498 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 start -p ha-936000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (70.09175ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StartCluster (9.88s)
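
Both create attempts die at the same step: the qemu2 driver cannot dial /var/run/socket_vmnet. A host-side triage sketch; the Homebrew service name is an assumption inferred from the /opt/socket_vmnet client path logged above:

	# check whether anything is serving the socket the driver dials
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet
	# if socket_vmnet is managed by Homebrew (assumption), restart its root service to recreate the socket
	sudo brew services restart socket_vmnet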

                                                
                                    
TestMultiControlPlane/serial/DeployApp (116.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml: exit status 1 (61.036833ms)

                                                
                                                
** stderr ** 
	error: cluster "ha-936000" does not exist

                                                
                                                
** /stderr **
ha_test.go:130: failed to create busybox deployment to ha (multi-control plane) cluster
ha_test.go:133: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- rollout status deployment/busybox: exit status 1 (59.177ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.511166ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.373ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.041333ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (108.158375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.9245ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.000042ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.259458ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.717709ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.682125ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.71375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:140: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:140: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (104.643292ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:143: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:159: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
ha_test.go:163: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:163: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (57.506083ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:165: failed get Pod names
ha_test.go:171: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- exec  -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- exec  -- nslookup kubernetes.io: exit status 1 (57.253167ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:173: Pod  could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- exec  -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- exec  -- nslookup kubernetes.default: exit status 1 (57.880375ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:183: Pod  could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.804209ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:191: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (31.996583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeployApp (116.61s)
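
Every kubectl call in this test fails the same way because StartCluster above never created the cluster, so no usable kubeconfig entry for "ha-936000" exists. A quick confirmation sketch, assuming kubectl is on the host PATH:

	# the ha-936000 context should be absent, and the profile should show Status "Stopped"
	kubectl config get-contexts
	out/minikube-darwin-arm64 profile list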

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:199: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p ha-936000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.264416ms)

                                                
                                                
** stderr ** 
	error: no server found for cluster "ha-936000"

                                                
                                                
** /stderr **
ha_test.go:201: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (31.836417ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (0.09s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-936000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-936000 -v=7 --alsologtostderr: exit status 83 (46.266958ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-936000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-936000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:07:40.286978   13600 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:40.287529   13600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.287532   13600 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:40.287534   13600 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.287678   13600 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:40.287904   13600 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:40.288096   13600 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:40.292641   13600 out.go:177] * The control-plane node ha-936000 host is not running: state=Stopped
	I0320 17:07:40.296590   13600 out.go:177]   To start a cluster, run: "minikube start -p ha-936000"

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-936000 -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.317542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-936000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
ha_test.go:255: (dbg) Non-zero exit: kubectl --context ha-936000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.0825ms)

                                                
                                                
** stderr ** 
	Error in configuration: context was not found for specified context: ha-936000

                                                
                                                
** /stderr **
ha_test.go:257: failed to 'kubectl get nodes' with args "kubectl --context ha-936000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
ha_test.go:264: failed to decode json from label list: args "kubectl --context ha-936000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.53625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-936000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-936000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (33.233042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.11s)
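
The assertion reads the node list out of the profile JSON; since the cluster was never provisioned, the config still carries only its single placeholder node. The count can be pulled directly, as a sketch assuming jq is installed on the host:

	# prints 1 here, against the 4 nodes the assertion expects
	out/minikube-darwin-arm64 profile list --output json | jq '.valid[0].Config.Nodes | length'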

                                                
                                    
TestMultiControlPlane/serial/CopyFile (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status --output json -v=7 --alsologtostderr: exit status 7 (33.787166ms)

                                                
                                                
-- stdout --
	{"Name":"ha-936000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:07:40.535628   13613 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:40.535780   13613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.535783   13613 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:40.535785   13613 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.535923   13613 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:40.536051   13613 out.go:298] Setting JSON to true
	I0320 17:07:40.536063   13613 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:40.536102   13613 notify.go:220] Checking for updates...
	I0320 17:07:40.536273   13613 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:40.536283   13613 status.go:255] checking status of ha-936000 ...
	I0320 17:07:40.536502   13613 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:40.536505   13613 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:40.536508   13613 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:333: failed to decode json from status: args "out/minikube-darwin-arm64 -p ha-936000 status --output json -v=7 --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (33.516ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/CopyFile (0.07s)
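
The decode error is a shape mismatch: with a single (stopped) node, status emits one JSON object, while the test unmarshals into []cmd.Status and so expects an array. A quick shape check, as a sketch assuming jq is installed:

	# prints "object" for this one-node profile; the multi-node cluster the test expects would yield "array"
	out/minikube-darwin-arm64 -p ha-936000 status --output json | jq type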

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 node stop m02 -v=7 --alsologtostderr: exit status 85 (49.648958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:07:40.602811   13617 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:40.603136   13617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.603140   13617 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:40.603142   13617 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.603268   13617 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:40.603493   13617 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:40.603697   13617 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:40.608297   13617 out.go:177] 
	W0320 17:07:40.611322   13617 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0320 17:07:40.611327   13617 out.go:239] * 
	* 
	W0320 17:07:40.613326   13617 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_8ce24bb09be8aab84076d51946735f62cbf80299_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:07:40.616262   13617 out.go:177] 

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-darwin-arm64 -p ha-936000 node stop m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:369: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (33.511834ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:40.653112   13619 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:40.653254   13619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.653260   13619 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:40.653263   13619 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.653395   13619 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:40.653528   13619 out.go:298] Setting JSON to false
	I0320 17:07:40.653550   13619 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:40.653580   13619 notify.go:220] Checking for updates...
	I0320 17:07:40.653746   13619 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:40.653753   13619 status.go:255] checking status of ha-936000 ...
	I0320 17:07:40.653959   13619 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:40.653963   13619 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:40.653965   13619 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (33.938666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (0.12s)
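
Note: the `node stop` call above exits with status 85 because the profile has no node named m02; the earlier StartCluster failure left ha-936000 as a single-node profile, so GUEST_NODE_RETRIEVE fires before any stop is attempted. A minimal Go sketch of this step outside the test harness, assuming only the binary path, profile name, and flags shown in the log (this is not the ha_test.go source):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as ha_test.go:363 above.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"-p", "ha-936000", "node", "stop", "m02", "-v=7", "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the failing run this prints 85 (GUEST_NODE_RETRIEVE):
		// the m02 node was never created.
		fmt.Println("exit status:", exitErr.ExitCode())
	}
}
```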

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-936000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.156625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.11s)
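
Note: ha_test.go:413 derives the expected "Degraded" state from `profile list --output json`. A sketch of that check, assuming only the JSON shape visible in the failure message above (a `valid` list of profiles with `Name` and `Status` fields); it is a reconstruction, not the test's actual source:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Subset of the profile-list JSON shown in the failure message.
type profileList struct {
	Valid []struct {
		Name   string `json:"Name"`
		Status string `json:"Status"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		// Expected "Degraded" after stopping one control-plane node;
		// this run reports "Stopped" because the VM never started.
		fmt.Printf("%s: %s\n", p.Name, p.Status)
	}
}
```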

TestMultiControlPlane/serial/RestartSecondaryNode (43.74s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 node start m02 -v=7 --alsologtostderr: exit status 85 (54.409583ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0320 17:07:40.829653   13629 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:40.829979   13629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.829983   13629 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:40.829986   13629 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.830121   13629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:40.830369   13629 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:40.830587   13629 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:40.835340   13629 out.go:177] 
	W0320 17:07:40.840211   13629 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
	W0320 17:07:40.840218   13629 out.go:239] * 
	* 
	W0320 17:07:40.842925   13629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:07:40.847356   13629 out.go:177] 

** /stderr **
ha_test.go:422: I0320 17:07:40.829653   13629 out.go:291] Setting OutFile to fd 1 ...
I0320 17:07:40.829979   13629 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:07:40.829983   13629 out.go:304] Setting ErrFile to fd 2...
I0320 17:07:40.829986   13629 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:07:40.830121   13629 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:07:40.830369   13629 mustload.go:65] Loading cluster: ha-936000
I0320 17:07:40.830587   13629 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:07:40.835340   13629 out.go:177] 
W0320 17:07:40.840211   13629 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m02
W0320 17:07:40.840218   13629 out.go:239] * 
* 
W0320 17:07:40.842925   13629 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0320 17:07:40.847356   13629 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-darwin-arm64 -p ha-936000 node start m02 -v=7 --alsologtostderr": exit status 85
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (34.892042ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:40.886424   13631 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:40.886581   13631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.886586   13631 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:40.886589   13631 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:40.886706   13631 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:40.886841   13631 out.go:298] Setting JSON to false
	I0320 17:07:40.886854   13631 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:40.886887   13631 notify.go:220] Checking for updates...
	I0320 17:07:40.887060   13631 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:40.887066   13631 status.go:255] checking status of ha-936000 ...
	I0320 17:07:40.887281   13631 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:40.887285   13631 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:40.887287   13631 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (76.597958ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:41.899808   13633 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:41.900032   13633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:41.900037   13633 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:41.900040   13633 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:41.900186   13633 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:41.900356   13633 out.go:298] Setting JSON to false
	I0320 17:07:41.900371   13633 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:41.900404   13633 notify.go:220] Checking for updates...
	I0320 17:07:41.900602   13633 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:41.900609   13633 status.go:255] checking status of ha-936000 ...
	I0320 17:07:41.900863   13633 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:41.900868   13633 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:41.900871   13633 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (76.550875ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:44.012280   13635 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:44.012502   13635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:44.012506   13635 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:44.012509   13635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:44.012670   13635 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:44.012841   13635 out.go:298] Setting JSON to false
	I0320 17:07:44.012858   13635 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:44.012901   13635 notify.go:220] Checking for updates...
	I0320 17:07:44.013156   13635 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:44.013165   13635 status.go:255] checking status of ha-936000 ...
	I0320 17:07:44.013437   13635 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:44.013442   13635 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:44.013445   13635 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (76.694667ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:46.766108   13637 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:46.766305   13637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:46.766310   13637 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:46.766314   13637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:46.766480   13637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:46.766678   13637 out.go:298] Setting JSON to false
	I0320 17:07:46.766697   13637 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:46.766748   13637 notify.go:220] Checking for updates...
	I0320 17:07:46.766972   13637 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:46.766983   13637 status.go:255] checking status of ha-936000 ...
	I0320 17:07:46.767268   13637 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:46.767273   13637 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:46.767276   13637 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (76.102458ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:48.732285   13639 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:48.732472   13639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:48.732476   13639 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:48.732479   13639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:48.732640   13639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:48.732802   13639 out.go:298] Setting JSON to false
	I0320 17:07:48.732816   13639 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:48.732859   13639 notify.go:220] Checking for updates...
	I0320 17:07:48.733095   13639 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:48.733104   13639 status.go:255] checking status of ha-936000 ...
	I0320 17:07:48.733360   13639 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:48.733364   13639 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:48.733367   13639 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (75.198084ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:07:51.932456   13641 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:07:51.932646   13641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:51.932650   13641 out.go:304] Setting ErrFile to fd 2...
	I0320 17:07:51.932653   13641 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:07:51.932808   13641 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:07:51.932961   13641 out.go:298] Setting JSON to false
	I0320 17:07:51.932976   13641 mustload.go:65] Loading cluster: ha-936000
	I0320 17:07:51.933013   13641 notify.go:220] Checking for updates...
	I0320 17:07:51.933256   13641 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:07:51.933267   13641 status.go:255] checking status of ha-936000 ...
	I0320 17:07:51.933540   13641 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:07:51.933545   13641 status.go:343] host is not running, skipping remaining checks
	I0320 17:07:51.933548   13641 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (75.577ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:08:01.767654   13645 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:01.767882   13645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:01.767887   13645 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:01.767890   13645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:01.768073   13645 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:01.768217   13645 out.go:298] Setting JSON to false
	I0320 17:08:01.768235   13645 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:01.768266   13645 notify.go:220] Checking for updates...
	I0320 17:08:01.768475   13645 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:01.768484   13645 status.go:255] checking status of ha-936000 ...
	I0320 17:08:01.768781   13645 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:08:01.768786   13645 status.go:343] host is not running, skipping remaining checks
	I0320 17:08:01.768789   13645 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (76.809ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:08:12.388826   13649 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:12.389025   13649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:12.389030   13649 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:12.389033   13649 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:12.389182   13649 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:12.389339   13649 out.go:298] Setting JSON to false
	I0320 17:08:12.389355   13649 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:12.389382   13649 notify.go:220] Checking for updates...
	I0320 17:08:12.389600   13649 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:12.389609   13649 status.go:255] checking status of ha-936000 ...
	I0320 17:08:12.389870   13649 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:08:12.389875   13649 status.go:343] host is not running, skipping remaining checks
	I0320 17:08:12.389878   13649 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (76.134542ms)

-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:08:24.505065   13654 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:24.505244   13654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:24.505248   13654 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:24.505251   13654 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:24.505418   13654 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:24.505584   13654 out.go:298] Setting JSON to false
	I0320 17:08:24.505602   13654 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:24.505646   13654 notify.go:220] Checking for updates...
	I0320 17:08:24.505889   13654 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:24.505897   13654 status.go:255] checking status of ha-936000 ...
	I0320 17:08:24.506140   13654 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:08:24.506145   13654 status.go:343] host is not running, skipping remaining checks
	I0320 17:08:24.506148   13654 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (33.993791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (43.74s)
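
Note: almost all of this test's 43.74s is the retry loop at ha_test.go:428, which re-runs `minikube status` with growing delays (the nine runs timestamped 17:07:40 through 17:08:24 above) and gives up when every attempt exits 7. A rough Go sketch of such a poll; the attempt count and backoff values are illustrative, not the harness's actual policy:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 9; attempt++ {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"-p", "ha-936000", "status", "-v=7", "--alsologtostderr").CombinedOutput()
		if err == nil {
			fmt.Printf("healthy after %d attempt(s):\n%s", attempt, out)
			return
		}
		// Every attempt in the run above fails with exit status 7
		// (host "Stopped"), so the loop exhausts and the test fails.
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(delay)
		delay *= 2 // illustrative backoff, not minikube's
	}
}
```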

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-936000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-936000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.547958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.11s)
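
Note: ha_test.go:304 counts the entries in `Config.Nodes` of the same profile JSON and expects 4; here it finds only the single node left over from the failed start, and ha_test.go:307 likewise sees "Stopped" instead of "HAppy". A hedged reconstruction of the node count, again using only field names visible in the failure message:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields needed for the node count; the real Config has many more.
type profiles struct {
	Valid []struct {
		Name   string `json:"Name"`
		Config struct {
			Nodes []struct {
				ControlPlane bool `json:"ControlPlane"`
				Worker       bool `json:"Worker"`
			} `json:"Nodes"`
		} `json:"Config"`
	} `json:"valid"`
}

func main() {
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var ps profiles
	if err := json.Unmarshal(out, &ps); err != nil {
		log.Fatal(err)
	}
	for _, p := range ps.Valid {
		// Prints "ha-936000: 1 node(s)" for the run above.
		fmt.Printf("%s: %d node(s)\n", p.Name, len(p.Config.Nodes))
	}
}
```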

TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.1s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-936000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-arm64 stop -p ha-936000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-arm64 stop -p ha-936000 -v=7 --alsologtostderr: (3.716226042s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-936000 --wait=true -v=7 --alsologtostderr
ha_test.go:467: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-936000 --wait=true -v=7 --alsologtostderr: exit status 80 (5.236824833s)

-- stdout --
	* [ha-936000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-936000" primary control-plane node in "ha-936000" cluster
	* Restarting existing qemu2 VM for "ha-936000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-936000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:08:28.467128   13684 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:28.467318   13684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:28.467322   13684 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:28.467325   13684 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:28.467471   13684 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:28.468608   13684 out.go:298] Setting JSON to false
	I0320 17:08:28.487698   13684 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7678,"bootTime":1710972030,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:08:28.487757   13684 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:08:28.491792   13684 out.go:177] * [ha-936000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:08:28.504583   13684 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:08:28.499684   13684 notify.go:220] Checking for updates...
	I0320 17:08:28.511561   13684 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:08:28.515435   13684 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:08:28.522582   13684 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:08:28.525637   13684 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:08:28.528558   13684 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:08:28.531923   13684 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:28.531991   13684 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:08:28.536573   13684 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:08:28.543563   13684 start.go:297] selected driver: qemu2
	I0320 17:08:28.543571   13684 start.go:901] validating driver "qemu2" against &{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:08:28.543651   13684 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:08:28.546243   13684 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:08:28.546287   13684 cni.go:84] Creating CNI manager for ""
	I0320 17:08:28.546293   13684 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0320 17:08:28.546353   13684 start.go:340] cluster config:
	{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:08:28.551225   13684 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:08:28.558576   13684 out.go:177] * Starting "ha-936000" primary control-plane node in "ha-936000" cluster
	I0320 17:08:28.562600   13684 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:08:28.562616   13684 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:08:28.562630   13684 cache.go:56] Caching tarball of preloaded images
	I0320 17:08:28.562690   13684 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:08:28.562699   13684 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:08:28.562766   13684 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/ha-936000/config.json ...
	I0320 17:08:28.563257   13684 start.go:360] acquireMachinesLock for ha-936000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:08:28.563292   13684 start.go:364] duration metric: took 28.375µs to acquireMachinesLock for "ha-936000"
	I0320 17:08:28.563302   13684 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:08:28.563308   13684 fix.go:54] fixHost starting: 
	I0320 17:08:28.563432   13684 fix.go:112] recreateIfNeeded on ha-936000: state=Stopped err=<nil>
	W0320 17:08:28.563442   13684 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:08:28.567602   13684 out.go:177] * Restarting existing qemu2 VM for "ha-936000" ...
	I0320 17:08:28.574579   13684 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:63:f0:ee:63:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:08:28.576833   13684 main.go:141] libmachine: STDOUT: 
	I0320 17:08:28.576856   13684 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:08:28.576887   13684 fix.go:56] duration metric: took 13.58ms for fixHost
	I0320 17:08:28.576892   13684 start.go:83] releasing machines lock for "ha-936000", held for 13.595625ms
	W0320 17:08:28.576900   13684 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:08:28.576953   13684 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:08:28.576959   13684 start.go:728] Will try again in 5 seconds ...
	I0320 17:08:33.578967   13684 start.go:360] acquireMachinesLock for ha-936000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:08:33.579261   13684 start.go:364] duration metric: took 214.292µs to acquireMachinesLock for "ha-936000"
	I0320 17:08:33.579404   13684 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:08:33.579426   13684 fix.go:54] fixHost starting: 
	I0320 17:08:33.580091   13684 fix.go:112] recreateIfNeeded on ha-936000: state=Stopped err=<nil>
	W0320 17:08:33.580119   13684 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:08:33.585536   13684 out.go:177] * Restarting existing qemu2 VM for "ha-936000" ...
	I0320 17:08:33.589652   13684 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:63:f0:ee:63:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:08:33.599107   13684 main.go:141] libmachine: STDOUT: 
	I0320 17:08:33.599204   13684 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:08:33.599310   13684 fix.go:56] duration metric: took 19.888041ms for fixHost
	I0320 17:08:33.599335   13684 start.go:83] releasing machines lock for "ha-936000", held for 20.048ms
	W0320 17:08:33.599591   13684 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-936000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-936000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:08:33.607437   13684 out.go:177] 
	W0320 17:08:33.611511   13684 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:08:33.611536   13684 out.go:239] * 
	* 
	W0320 17:08:33.614036   13684 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:08:33.621491   13684 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:469: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p ha-936000 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 node list -p ha-936000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (34.92325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (9.10s)
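Every failure in this block has the same proximate cause: the qemu2 driver launches the VM through socket_vmnet_client, and the client cannot dial the daemon's Unix socket at /var/run/socket_vmnet. A minimal triage sketch on the affected host, assuming socket_vmnet is installed under /opt/socket_vmnet as in the command lines above (the launchctl query and the --vmnet-gateway value follow the socket_vmnet README and are assumptions, not taken from this log):

	# Is the socket present at the path the driver dials?
	ls -l /var/run/socket_vmnet
	# Is a socket_vmnet daemon loaded at all? (service label varies by install method)
	sudo launchctl list | grep -i vmnet
	# Run the daemon in the foreground for one retry of the test
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet

If the daemon is healthy, the "Connection refused" line in the STDERR above should disappear on the next "Restarting existing qemu2 VM" attempt.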

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 node delete m03 -v=7 --alsologtostderr: exit status 83 (42.570333ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-936000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-936000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:08:33.772883   13696 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:33.773455   13696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:33.773458   13696 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:33.773461   13696 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:33.773625   13696 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:33.773857   13696 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:33.774040   13696 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:33.776085   13696 out.go:177] * The control-plane node ha-936000 host is not running: state=Stopped
	I0320 17:08:33.779867   13696 out.go:177]   To start a cluster, run: "minikube start -p ha-936000"

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-darwin-arm64 -p ha-936000 node delete m03 -v=7 --alsologtostderr": exit status 83
ha_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (32.044208ms)

                                                
                                                
-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:08:33.815208   13698 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:33.815357   13698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:33.815363   13698 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:33.815364   13698 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:33.815488   13698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:33.815609   13698 out.go:298] Setting JSON to false
	I0320 17:08:33.815621   13698 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:33.815671   13698 notify.go:220] Checking for updates...
	I0320 17:08:33.815818   13698 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:33.815828   13698 status.go:255] checking status of ha-936000 ...
	I0320 17:08:33.816050   13698 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:08:33.816054   13698 status.go:343] host is not running, skipping remaining checks
	I0320 17:08:33.816056   13698 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.118542ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (0.11s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-936000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.268042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.11s)
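The assertion at ha_test.go:413 cares only about the Status field buried in that JSON blob. The check can be reproduced by hand; a sketch assuming jq is installed (jq is not part of the test harness, purely illustrative):

	out/minikube-darwin-arm64 profile list --output json \
	  | jq -r '.valid[] | select(.Name == "ha-936000") | .Status'
	# prints: Stopped   (the test expects: Degraded)

Since the host never started, the profile never leaves "Stopped", so every status-shape assertion downstream of the socket_vmnet failure fails the same way.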

                                                
                                    
TestMultiControlPlane/serial/StopCluster (3.12s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-arm64 -p ha-936000 stop -v=7 --alsologtostderr: (3.02156s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr: exit status 7 (66.203959ms)

                                                
                                                
-- stdout --
	ha-936000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:08:37.042097   13726 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:37.042271   13726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:37.042275   13726 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:37.042278   13726 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:37.042422   13726 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:37.042554   13726 out.go:298] Setting JSON to false
	I0320 17:08:37.042568   13726 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:37.042600   13726 notify.go:220] Checking for updates...
	I0320 17:08:37.042807   13726 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:37.042815   13726 status.go:255] checking status of ha-936000 ...
	I0320 17:08:37.043075   13726 status.go:330] ha-936000 host status = "Stopped" (err=<nil>)
	I0320 17:08:37.043079   13726 status.go:343] host is not running, skipping remaining checks
	I0320 17:08:37.043082   13726 status.go:257] ha-936000 status: &{Name:ha-936000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:543: status says not two control-plane nodes are present: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha_test.go:552: status says not two apiservers are stopped: args "out/minikube-darwin-arm64 -p ha-936000 status -v=7 --alsologtostderr": ha-936000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (33.968917ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (3.12s)
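The three assertions above count node blocks in the status text: after stopping an HA cluster the test expects two control-plane entries, three stopped kubelets, and two stopped apiservers, but only the single ha-936000 block is printed because the secondary nodes were never created. A rough way to reproduce the counts, assuming the same status output (illustrative, not the harness's own parsing):

	out/minikube-darwin-arm64 -p ha-936000 status | grep -c 'type: Control Plane'   # want 2, got 1
	out/minikube-darwin-arm64 -p ha-936000 status | grep -c 'kubelet: Stopped'      # want 3, got 1
	out/minikube-darwin-arm64 -p ha-936000 status | grep -c 'apiserver: Stopped'    # want 2, got 1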

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (5.27s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-arm64 start -p ha-936000 --wait=true -v=7 --alsologtostderr --driver=qemu2 
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p ha-936000 --wait=true -v=7 --alsologtostderr --driver=qemu2 : exit status 80 (5.193883s)

                                                
                                                
-- stdout --
	* [ha-936000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "ha-936000" primary control-plane node in "ha-936000" cluster
	* Restarting existing qemu2 VM for "ha-936000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "ha-936000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:08:37.107881   13730 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:37.108136   13730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:37.108142   13730 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:37.108145   13730 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:37.108318   13730 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:37.109663   13730 out.go:298] Setting JSON to false
	I0320 17:08:37.126100   13730 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7687,"bootTime":1710972030,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:08:37.126161   13730 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:08:37.130264   13730 out.go:177] * [ha-936000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:08:37.138094   13730 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:08:37.138186   13730 notify.go:220] Checking for updates...
	I0320 17:08:37.144958   13730 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:08:37.148037   13730 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:08:37.151087   13730 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:08:37.152569   13730 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:08:37.156064   13730 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:08:37.159378   13730 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:37.159666   13730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:08:37.163894   13730 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:08:37.171043   13730 start.go:297] selected driver: qemu2
	I0320 17:08:37.171049   13730 start.go:901] validating driver "qemu2" against &{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:08:37.171116   13730 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:08:37.173412   13730 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:08:37.173462   13730 cni.go:84] Creating CNI manager for ""
	I0320 17:08:37.173467   13730 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0320 17:08:37.173524   13730 start.go:340] cluster config:
	{Name:ha-936000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:ha-936000 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:08:37.177946   13730 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:08:37.186037   13730 out.go:177] * Starting "ha-936000" primary control-plane node in "ha-936000" cluster
	I0320 17:08:37.190075   13730 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:08:37.190091   13730 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:08:37.190100   13730 cache.go:56] Caching tarball of preloaded images
	I0320 17:08:37.190149   13730 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:08:37.190155   13730 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:08:37.190241   13730 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/ha-936000/config.json ...
	I0320 17:08:37.190739   13730 start.go:360] acquireMachinesLock for ha-936000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:08:37.190768   13730 start.go:364] duration metric: took 22.083µs to acquireMachinesLock for "ha-936000"
	I0320 17:08:37.190777   13730 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:08:37.190783   13730 fix.go:54] fixHost starting: 
	I0320 17:08:37.190909   13730 fix.go:112] recreateIfNeeded on ha-936000: state=Stopped err=<nil>
	W0320 17:08:37.190919   13730 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:08:37.199082   13730 out.go:177] * Restarting existing qemu2 VM for "ha-936000" ...
	I0320 17:08:37.203048   13730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:63:f0:ee:63:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:08:37.205223   13730 main.go:141] libmachine: STDOUT: 
	I0320 17:08:37.205248   13730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:08:37.205280   13730 fix.go:56] duration metric: took 14.497042ms for fixHost
	I0320 17:08:37.205285   13730 start.go:83] releasing machines lock for "ha-936000", held for 14.513209ms
	W0320 17:08:37.205300   13730 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:08:37.205340   13730 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:08:37.205345   13730 start.go:728] Will try again in 5 seconds ...
	I0320 17:08:42.207401   13730 start.go:360] acquireMachinesLock for ha-936000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:08:42.207782   13730 start.go:364] duration metric: took 265.375µs to acquireMachinesLock for "ha-936000"
	I0320 17:08:42.207919   13730 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:08:42.207943   13730 fix.go:54] fixHost starting: 
	I0320 17:08:42.208625   13730 fix.go:112] recreateIfNeeded on ha-936000: state=Stopped err=<nil>
	W0320 17:08:42.208650   13730 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:08:42.218021   13730 out.go:177] * Restarting existing qemu2 VM for "ha-936000" ...
	I0320 17:08:42.223255   13730 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/qemu.pid -device virtio-net-pci,netdev=net0,mac=76:63:f0:ee:63:3a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/ha-936000/disk.qcow2
	I0320 17:08:42.233643   13730 main.go:141] libmachine: STDOUT: 
	I0320 17:08:42.233727   13730 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:08:42.233829   13730 fix.go:56] duration metric: took 25.889958ms for fixHost
	I0320 17:08:42.233849   13730 start.go:83] releasing machines lock for "ha-936000", held for 26.045458ms
	W0320 17:08:42.234056   13730 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p ha-936000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p ha-936000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:08:42.241957   13730 out.go:177] 
	W0320 17:08:42.246131   13730 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:08:42.246158   13730 out.go:239] * 
	* 
	W0320 17:08:42.248553   13730 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:08:42.258008   13730 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-arm64 start -p ha-936000 --wait=true -v=7 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (71.262834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (5.27s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:413: expected profile "ha-936000" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3
\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSo
ck\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.118375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.11s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-arm64 node add -p ha-936000 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p ha-936000 --control-plane -v=7 --alsologtostderr: exit status 83 (44.431667ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-936000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-936000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:08:42.485880   13752 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:08:42.486028   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:42.486031   13752 out.go:304] Setting ErrFile to fd 2...
	I0320 17:08:42.486033   13752 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:08:42.486152   13752 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:08:42.486414   13752 mustload.go:65] Loading cluster: ha-936000
	I0320 17:08:42.486587   13752 config.go:182] Loaded profile config "ha-936000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:08:42.490991   13752 out.go:177] * The control-plane node ha-936000 host is not running: state=Stopped
	I0320 17:08:42.494919   13752 out.go:177]   To start a cluster, run: "minikube start -p ha-936000"

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-darwin-arm64 node add -p ha-936000 --control-plane -v=7 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.550792ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (0.08s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
ha_test.go:304: expected profile "ha-936000" in json of 'profile list' to include 4 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPo
rt\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRu
ntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHA
gentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
ha_test.go:307: expected profile "ha-936000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-936000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-936000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,
\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"ha-936000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",
\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\
":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p ha-936000 -n ha-936000: exit status 7 (32.43425ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-936000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.11s)

                                                
                                    
TestImageBuild/serial/Setup (9.89s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-arm64 start -p image-367000 --driver=qemu2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p image-367000 --driver=qemu2 : exit status 80 (9.82349s)

                                                
                                                
-- stdout --
	* [image-367000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "image-367000" primary control-plane node in "image-367000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "image-367000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p image-367000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-darwin-arm64 start -p image-367000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p image-367000 -n image-367000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p image-367000 -n image-367000: exit status 7 (70.294791ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "image-367000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestImageBuild/serial/Setup (9.89s)

                                                
                                    
TestJSONOutput/start/Command (9.78s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-503000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-503000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 : exit status 80 (9.776875333s)

-- stdout --
	{"specversion":"1.0","id":"fc8e03bd-8a3c-47c2-82f1-6af658f2b8e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-503000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7514216-47de-4cb7-9813-01e2aec884f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17713"}}
	{"specversion":"1.0","id":"73779db3-91ee-4031-9845-27fdcc0ce5ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig"}}
	{"specversion":"1.0","id":"ccc40bdb-d89e-41af-a6f7-9dbeff6b120f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"1d14f4d0-14be-460f-8791-9c06d7f9060c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d1be8423-b62b-4dee-bdcd-78f66609173a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube"}}
	{"specversion":"1.0","id":"a5f74ecd-ae2e-4829-a75e-e55e101a3a4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14ecf592-09c7-4c27-ad1d-1645a88d585f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the qemu2 driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"44c6ca77-b34e-409d-bffc-c68b4a4918ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Automatically selected the socket_vmnet network"}}
	{"specversion":"1.0","id":"1b3a9ee0-54a4-46b0-82fe-a47efbb7f260","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"json-output-503000\" primary control-plane node in \"json-output-503000\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"619ec3b2-667c-425b-b658-e13b21ffd629","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"a4259fcf-ace8-4c7d-a836-0b2940b44a26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Deleting \"json-output-503000\" in qemu2 ...","name":"Creating VM","totalsteps":"19"}}
	{"specversion":"1.0","id":"5c0ecc57-cdda-45a2-81ab-d357b44a42bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"StartHost failed, but will try again: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"6d15f639-29f5-4e0f-9be6-b9b58f0a457e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"9","message":"Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...","name":"Creating VM","totalsteps":"19"}}
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	{"specversion":"1.0","id":"400a8334-9c4b-4c14-9477-57d1b90a3542","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"Failed to start qemu2 VM. Running \"minikube delete -p json-output-503000\" may fix it: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1"}}
	{"specversion":"1.0","id":"46c9d99c-19b1-42ad-8c78-b41f0409b6f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to \"/var/run/socket_vmnet\": Connection refused: exit status 1","name":"GUEST_PROVISION","url":""}}
	{"specversion":"1.0","id":"a0d33858-f0b6-436f-a18e-0a002474240a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│
│\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 start -p json-output-503000 --output=json --user=testUser --memory=2200 --wait=true --driver=qemu2 ": exit status 80
json_output_test.go:213: unable to marshal output: OUTPUT: 
json_output_test.go:70: converting to cloud events: invalid character 'O' looking for beginning of value
--- FAIL: TestJSONOutput/start/Command (9.78s)
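Beyond the provisioning failure, this test also fails to parse its own stdout: the raw OUTPUT:/ERROR: lines emitted by socket_vmnet_client are interleaved with the JSON cloud events, so line-by-line JSON decoding stops at the first non-JSON byte, hence "invalid character 'O'" (the unpause failure below hits the same class of error with '*' from plain-text output). An illustrative sketch of that decoding step, assuming a simplified line-per-event parser rather than the actual json_output_test.go code:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Two of the lines the test reads back: a real cloud event, then the
	// raw socket_vmnet_client output that is not JSON.
	lines := []string{
		`{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"message":"Creating qemu2 VM ..."}}`,
		`OUTPUT: `,
	}
	for _, l := range lines {
		var ev map[string]interface{}
		if err := json.Unmarshal([]byte(l), &ev); err != nil {
			// Prints: invalid character 'O' looking for beginning of value
			fmt.Println("converting to cloud events:", err)
			return
		}
		fmt.Println("ok:", ev["type"])
	}
}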

TestJSONOutput/pause/Command (0.08s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 pause -p json-output-503000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p json-output-503000 --output=json --user=testUser: exit status 83 (79.467375ms)

-- stdout --
	{"specversion":"1.0","id":"1d06dad5-d187-4f33-ba2f-b472be07da3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"The control-plane node json-output-503000 host is not running: state=Stopped"}}
	{"specversion":"1.0","id":"1ac79977-8958-4a9e-835b-fa85358a5003","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"To start a cluster, run: \"minikube start -p json-output-503000\""}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 pause -p json-output-503000 --output=json --user=testUser": exit status 83
--- FAIL: TestJSONOutput/pause/Command (0.08s)

TestJSONOutput/unpause/Command (0.05s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 unpause -p json-output-503000 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-darwin-arm64 unpause -p json-output-503000 --output=json --user=testUser: exit status 83 (48.373417ms)

-- stdout --
	* The control-plane node json-output-503000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p json-output-503000"

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-darwin-arm64 unpause -p json-output-503000 --output=json --user=testUser": exit status 83
json_output_test.go:213: unable to marshal output: * The control-plane node json-output-503000 host is not running: state=Stopped
json_output_test.go:70: converting to cloud events: invalid character '*' looking for beginning of value
--- FAIL: TestJSONOutput/unpause/Command (0.05s)

TestMinikubeProfile (10.25s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p first-357000 --driver=qemu2 
minikube_profile_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p first-357000 --driver=qemu2 : exit status 80 (9.799379958s)

-- stdout --
	* [first-357000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "first-357000" primary control-plane node in "first-357000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "first-357000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p first-357000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
minikube_profile_test.go:46: test pre-condition failed. args "out/minikube-darwin-arm64 start -p first-357000 --driver=qemu2 ": exit status 80
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-20 17:09:15.066053 -0700 PDT m=+541.282557668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p second-359000 -n second-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p second-359000 -n second-359000: exit status 85 (84.49775ms)

-- stdout --
	* Profile "second-359000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p second-359000"

-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "second-359000" host is not running, skipping log retrieval (state="* Profile \"second-359000\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p second-359000\"")
helpers_test.go:175: Cleaning up "second-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p second-359000
panic.go:626: *** TestMinikubeProfile FAILED at 2024-03-20 17:09:15.386806 -0700 PDT m=+541.603322710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p first-357000 -n first-357000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p first-357000 -n first-357000: exit status 7 (32.394583ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "first-357000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "first-357000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p first-357000
--- FAIL: TestMinikubeProfile (10.25s)

TestMountStart/serial/StartWithMountFirst (10.64s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-arm64 start -p mount-start-1-435000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 
mount_start_test.go:98: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p mount-start-1-435000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 : exit status 80 (10.568271583s)

-- stdout --
	* [mount-start-1-435000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting minikube without Kubernetes in cluster mount-start-1-435000
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "mount-start-1-435000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p mount-start-1-435000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
mount_start_test.go:100: failed to start minikube with args: "out/minikube-darwin-arm64 start -p mount-start-1-435000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-435000 -n mount-start-1-435000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p mount-start-1-435000 -n mount-start-1-435000: exit status 7 (70.265125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "mount-start-1-435000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMountStart/serial/StartWithMountFirst (10.64s)

TestMultiNode/serial/FreshStart2Nodes (10.07s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-894000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-894000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (10.002481458s)

-- stdout --
	* [multinode-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-894000" primary control-plane node in "multinode-894000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-894000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:09:26.525313   13909 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:09:26.525452   13909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:09:26.525458   13909 out.go:304] Setting ErrFile to fd 2...
	I0320 17:09:26.525461   13909 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:09:26.525593   13909 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:09:26.526675   13909 out.go:298] Setting JSON to false
	I0320 17:09:26.542912   13909 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7736,"bootTime":1710972030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:09:26.542966   13909 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:09:26.549894   13909 out.go:177] * [multinode-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:09:26.556775   13909 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:09:26.561837   13909 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:09:26.556812   13909 notify.go:220] Checking for updates...
	I0320 17:09:26.569765   13909 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:09:26.573808   13909 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:09:26.576751   13909 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:09:26.579786   13909 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:09:26.583000   13909 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:09:26.587761   13909 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:09:26.594876   13909 start.go:297] selected driver: qemu2
	I0320 17:09:26.594883   13909 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:09:26.594891   13909 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:09:26.597163   13909 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:09:26.601705   13909 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:09:26.604837   13909 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:09:26.604877   13909 cni.go:84] Creating CNI manager for ""
	I0320 17:09:26.604882   13909 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0320 17:09:26.604886   13909 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0320 17:09:26.604922   13909 start.go:340] cluster config:
	{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:09:26.610056   13909 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:09:26.618748   13909 out.go:177] * Starting "multinode-894000" primary control-plane node in "multinode-894000" cluster
	I0320 17:09:26.622796   13909 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:09:26.622813   13909 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:09:26.622825   13909 cache.go:56] Caching tarball of preloaded images
	I0320 17:09:26.622889   13909 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:09:26.622902   13909 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:09:26.623176   13909 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/multinode-894000/config.json ...
	I0320 17:09:26.623193   13909 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/multinode-894000/config.json: {Name:mk7fc90734b1910286716d7bb83bf211248746b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:09:26.623445   13909 start.go:360] acquireMachinesLock for multinode-894000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:09:26.623481   13909 start.go:364] duration metric: took 30.208µs to acquireMachinesLock for "multinode-894000"
	I0320 17:09:26.623497   13909 start.go:93] Provisioning new machine with config: &{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:09:26.623528   13909 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:09:26.632791   13909 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:09:26.651763   13909 start.go:159] libmachine.API.Create for "multinode-894000" (driver="qemu2")
	I0320 17:09:26.651786   13909 client.go:168] LocalClient.Create starting
	I0320 17:09:26.651852   13909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:09:26.651882   13909 main.go:141] libmachine: Decoding PEM data...
	I0320 17:09:26.651898   13909 main.go:141] libmachine: Parsing certificate...
	I0320 17:09:26.651951   13909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:09:26.651974   13909 main.go:141] libmachine: Decoding PEM data...
	I0320 17:09:26.651984   13909 main.go:141] libmachine: Parsing certificate...
	I0320 17:09:26.652426   13909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:09:26.795517   13909 main.go:141] libmachine: Creating SSH key...
	I0320 17:09:26.939045   13909 main.go:141] libmachine: Creating Disk image...
	I0320 17:09:26.939052   13909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:09:26.939224   13909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:09:26.951673   13909 main.go:141] libmachine: STDOUT: 
	I0320 17:09:26.951702   13909 main.go:141] libmachine: STDERR: 
	I0320 17:09:26.951762   13909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2 +20000M
	I0320 17:09:26.962336   13909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:09:26.962365   13909 main.go:141] libmachine: STDERR: 
	I0320 17:09:26.962378   13909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:09:26.962384   13909 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:09:26.962428   13909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=22:eb:07:66:04:a6 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:09:26.964175   13909 main.go:141] libmachine: STDOUT: 
	I0320 17:09:26.964193   13909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:09:26.964212   13909 client.go:171] duration metric: took 312.432417ms to LocalClient.Create
	I0320 17:09:28.966379   13909 start.go:128] duration metric: took 2.342918166s to createHost
	I0320 17:09:28.966463   13909 start.go:83] releasing machines lock for "multinode-894000", held for 2.34306325s
	W0320 17:09:28.966572   13909 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:09:28.977837   13909 out.go:177] * Deleting "multinode-894000" in qemu2 ...
	W0320 17:09:29.005283   13909 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:09:29.005311   13909 start.go:728] Will try again in 5 seconds ...
	I0320 17:09:34.007337   13909 start.go:360] acquireMachinesLock for multinode-894000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:09:34.007822   13909 start.go:364] duration metric: took 363.75µs to acquireMachinesLock for "multinode-894000"
	I0320 17:09:34.007962   13909 start.go:93] Provisioning new machine with config: &{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:09:34.008222   13909 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:09:34.016934   13909 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:09:34.066381   13909 start.go:159] libmachine.API.Create for "multinode-894000" (driver="qemu2")
	I0320 17:09:34.066432   13909 client.go:168] LocalClient.Create starting
	I0320 17:09:34.066550   13909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:09:34.066613   13909 main.go:141] libmachine: Decoding PEM data...
	I0320 17:09:34.066632   13909 main.go:141] libmachine: Parsing certificate...
	I0320 17:09:34.066714   13909 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:09:34.066757   13909 main.go:141] libmachine: Decoding PEM data...
	I0320 17:09:34.066766   13909 main.go:141] libmachine: Parsing certificate...
	I0320 17:09:34.067294   13909 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:09:34.220280   13909 main.go:141] libmachine: Creating SSH key...
	I0320 17:09:34.422090   13909 main.go:141] libmachine: Creating Disk image...
	I0320 17:09:34.422101   13909 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:09:34.422350   13909 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:09:34.435058   13909 main.go:141] libmachine: STDOUT: 
	I0320 17:09:34.435083   13909 main.go:141] libmachine: STDERR: 
	I0320 17:09:34.435133   13909 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2 +20000M
	I0320 17:09:34.445831   13909 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:09:34.445847   13909 main.go:141] libmachine: STDERR: 
	I0320 17:09:34.445857   13909 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:09:34.445862   13909 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:09:34.445904   13909 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:08:36:51:4d:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:09:34.447659   13909 main.go:141] libmachine: STDOUT: 
	I0320 17:09:34.447674   13909 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:09:34.447686   13909 client.go:171] duration metric: took 381.261958ms to LocalClient.Create
	I0320 17:09:36.449786   13909 start.go:128] duration metric: took 2.441628917s to createHost
	I0320 17:09:36.449835   13909 start.go:83] releasing machines lock for "multinode-894000", held for 2.442083375s
	W0320 17:09:36.450325   13909 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-894000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:09:36.459789   13909 out.go:177] 
	W0320 17:09:36.469102   13909 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:09:36.469127   13909 out.go:239] * 
	* 
	W0320 17:09:36.471592   13909 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:09:36.482013   13909 out.go:177] 

** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-894000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (68.622375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (10.07s)
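The verbose trace above shows the failing mechanism precisely: libmachine runs socket_vmnet_client, which connects to /var/run/socket_vmnet and hands the connected socket to qemu-system-aarch64 as file descriptor 3, matching the logged "-netdev socket,id=net0,fd=3". A rough Go sketch of that fd-passing pattern, using placeholder qemu arguments rather than the full command line logged above (an illustration of the mechanism, not socket_vmnet_client's actual source):

package main

import (
	"log"
	"net"
	"os"
	"os/exec"
)

func main() {
	conn, err := net.Dial("unix", "/var/run/socket_vmnet")
	if err != nil {
		// The exact step that fails throughout this report.
		log.Fatalf("Failed to connect to %q: %v", "/var/run/socket_vmnet", err)
	}
	f, err := conn.(*net.UnixConn).File()
	if err != nil {
		log.Fatal(err)
	}
	// ExtraFiles[0] becomes fd 3 in the child process, which is why the
	// logged qemu command refers to the network socket as fd=3.
	cmd := exec.Command("qemu-system-aarch64", "-netdev", "socket,id=net0,fd=3")
	cmd.ExtraFiles = []*os.File{f}
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}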

TestMultiNode/serial/DeployApp2Nodes (73.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:493: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml: exit status 1 (61.563ms)

** stderr ** 
	error: cluster "multinode-894000" does not exist

** /stderr **
multinode_test.go:495: failed to create busybox deployment to multinode cluster
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- rollout status deployment/busybox: exit status 1 (58.779708ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (59.148583ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.960791ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.515833ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.763208ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (105.544084ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.001833ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.91675ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (103.191042ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.202292ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:505: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].status.podIP}': exit status 1 (106.04225ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:508: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:524: failed to resolve pod IPs: failed to retrieve Pod IPs (may be temporary): exit status 1
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:528: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (59.329583ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:530: failed get Pod names
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- exec  -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- exec  -- nslookup kubernetes.io: exit status 1 (59.044208ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:538: Pod  could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- exec  -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- exec  -- nslookup kubernetes.default: exit status 1 (58.5535ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:548: Pod  could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- exec  -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- exec  -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (58.557291ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:556: Pod  could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.822625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (73.30s)

TestMultiNode/serial/PingHostFrom2Pods (0.09s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:564: (dbg) Non-zero exit: out/minikube-darwin-arm64 kubectl -p multinode-894000 -- get pods -o jsonpath='{.items[*].metadata.name}': exit status 1 (58.610417ms)

** stderr ** 
	error: no server found for cluster "multinode-894000"

** /stderr **
multinode_test.go:566: failed to get Pod names: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.697875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (0.09s)

TestMultiNode/serial/AddNode (0.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-894000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-894000 -v 3 --alsologtostderr: exit status 83 (43.908041ms)

-- stdout --
	* The control-plane node multinode-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-894000"

-- /stdout --
** stderr ** 
	I0320 17:10:49.988080   13999 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:49.988276   13999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:49.988285   13999 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:49.988287   13999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:49.988399   13999 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:49.988634   13999 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:49.988814   13999 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:49.993433   13999 out.go:177] * The control-plane node multinode-894000 host is not running: state=Stopped
	I0320 17:10:49.996227   13999 out.go:177]   To start a cluster, run: "minikube start -p multinode-894000"

** /stderr **
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-darwin-arm64 node add -p multinode-894000 -v 3 --alsologtostderr" : exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.4395ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/AddNode (0.08s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-894000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
multinode_test.go:221: (dbg) Non-zero exit: kubectl --context multinode-894000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]": exit status 1 (26.447209ms)

** stderr ** 
	Error in configuration: context was not found for specified context: multinode-894000

** /stderr **
multinode_test.go:223: failed to 'kubectl get nodes' with args "kubectl --context multinode-894000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": exit status 1
multinode_test.go:230: failed to decode json from label list: args "kubectl --context multinode-894000 get nodes -o \"jsonpath=[{range .items[*]}{.metadata.labels},{end}]\"": unexpected end of JSON input
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.811041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/MultiNodeLabels (0.06s)
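
A note on the second error above: because kubectl exited non-zero, its stdout was empty, and decoding zero bytes with Go's encoding/json always reports "unexpected end of JSON input". A minimal sketch of that mechanism, using plain Go types rather than the test's own:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// kubectl failed, so the test decoded an empty byte slice.
	var labels []map[string]string
	err := json.Unmarshal(nil, &labels)
	fmt.Println(err) // unexpected end of JSON input
}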

TestMultiNode/serial/ProfileList (0.11s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
multinode_test.go:166: expected profile "multinode-894000" in json of 'profile list' include 3 nodes but have 1 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"multinode-894000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"multinode-894000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"qemu2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.29.3\",\"ClusterName\":\"multinode-894000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.29.3\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":null,\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"socket_vmnet\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"/opt/socket_vmnet/bin/socket_vmnet_client\",\"SocketVMnetPath\":\"/var/run/socket_vmnet\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false}]}"*. args: "out/minikube-darwin-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.241792ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ProfileList (0.11s)
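
The ProfileList failure above is a plain count mismatch: the test decodes the `profile list --output json` payload and expects three entries under Config.Nodes, but the stopped profile records only one. A minimal sketch of that check, with the payload trimmed to the fields involved (simplified types; the real structs live in minikube's config package):

package main

import (
	"encoding/json"
	"fmt"
)

// profileList mirrors just enough of the payload to count nodes.
type profileList struct {
	Valid []struct {
		Name   string
		Config struct {
			Nodes []struct{ Name string }
		}
	} `json:"valid"`
}

func main() {
	// Trimmed from the `profile list --output json` output in the log above.
	out := []byte(`{"invalid":[],"valid":[{"Name":"multinode-894000","Config":{"Nodes":[{"Name":""}]}}]}`)

	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	fmt.Printf("expected 3 nodes, have %d\n", len(pl.Valid[0].Config.Nodes)) // have 1
}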

TestMultiNode/serial/CopyFile (0.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status --output json --alsologtostderr: exit status 7 (32.572625ms)

-- stdout --
	{"Name":"multinode-894000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}

-- /stdout --
** stderr ** 
	I0320 17:10:50.229453   14012 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:50.229587   14012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.229591   14012 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:50.229594   14012 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.229738   14012 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:50.229855   14012 out.go:298] Setting JSON to true
	I0320 17:10:50.229868   14012 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:50.229911   14012 notify.go:220] Checking for updates...
	I0320 17:10:50.230073   14012 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:50.230080   14012 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:50.230296   14012 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:50.230300   14012 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:50.230302   14012 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:191: failed to decode json from status: args "out/minikube-darwin-arm64 -p multinode-894000 status --output json --alsologtostderr": json: cannot unmarshal object into Go value of type []cmd.Status
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.131ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/CopyFile (0.07s)
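
The decode error above is a shape mismatch rather than corrupt output: with only one node in the profile, `status --output json` prints a single JSON object, while the test unmarshals into a slice. A minimal reproduction, with Status pared down to the fields shown in the log (the real type is minikube's cmd.Status):

package main

import (
	"encoding/json"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// Verbatim single-object stdout from the log above.
	out := []byte(`{"Name":"multinode-894000","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":false}`)

	var statuses []Status
	err := json.Unmarshal(out, &statuses)
	fmt.Println(err) // json: cannot unmarshal object into Go value of type []main.Status
}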

TestMultiNode/serial/StopNode (0.15s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 node stop m03
multinode_test.go:248: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 node stop m03: exit status 85 (49.791667ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_295f67d8757edd996fe5c1e7ccde72c355ccf4dc_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:250: node stop returned an error. args "out/minikube-darwin-arm64 -p multinode-894000 node stop m03": exit status 85
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status: exit status 7 (32.213334ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr: exit status 7 (32.009958ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:10:50.376401   14020 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:50.376560   14020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.376564   14020 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:50.376566   14020 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.376697   14020 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:50.376824   14020 out.go:298] Setting JSON to false
	I0320 17:10:50.376833   14020 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:50.376899   14020 notify.go:220] Checking for updates...
	I0320 17:10:50.377064   14020 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:50.377071   14020 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:50.377271   14020 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:50.377275   14020 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:50.377277   14020 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr": multinode-894000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.460708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopNode (0.15s)

TestMultiNode/serial/StartAfterStop (48.03s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 node start m03 -v=7 --alsologtostderr: exit status 85 (47.871792ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0320 17:10:50.441566   14024 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:50.441945   14024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.441949   14024 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:50.441951   14024 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.442123   14024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:50.442343   14024 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:50.442541   14024 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:50.445944   14024 out.go:177] 
	W0320 17:10:50.449931   14024 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
	W0320 17:10:50.449936   14024 out.go:239] * 
	* 
	W0320 17:10:50.452004   14024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:10:50.454905   14024 out.go:177] 

** /stderr **
multinode_test.go:284: I0320 17:10:50.441566   14024 out.go:291] Setting OutFile to fd 1 ...
I0320 17:10:50.441945   14024 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:10:50.441949   14024 out.go:304] Setting ErrFile to fd 2...
I0320 17:10:50.441951   14024 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0320 17:10:50.442123   14024 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
I0320 17:10:50.442343   14024 mustload.go:65] Loading cluster: multinode-894000
I0320 17:10:50.442541   14024 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
I0320 17:10:50.445944   14024 out.go:177] 
W0320 17:10:50.449931   14024 out.go:239] X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
X Exiting due to GUEST_NODE_RETRIEVE: retrieving node: Could not find node m03
W0320 17:10:50.449936   14024 out.go:239] * 
* 
W0320 17:10:50.452004   14024 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                         │
│    * If the above advice does not help, please let us know:                                                             │
│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
│                                                                                                                         │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    * Please also attach the following file to the GitHub issue:                                                         │
│    * - /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log    │
│                                                                                                                         │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
I0320 17:10:50.454905   14024 out.go:177] 
multinode_test.go:285: node start returned an error. args "out/minikube-darwin-arm64 -p multinode-894000 node start m03 -v=7 --alsologtostderr": exit status 85
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (32.561417ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:10:50.489792   14026 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:50.489938   14026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.489942   14026 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:50.489944   14026 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:50.490083   14026 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:50.490202   14026 out.go:298] Setting JSON to false
	I0320 17:10:50.490215   14026 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:50.490268   14026 notify.go:220] Checking for updates...
	I0320 17:10:50.490417   14026 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:50.490424   14026 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:50.490619   14026 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:50.490624   14026 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:50.490627   14026 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (78.46525ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:10:51.517988   14028 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:51.518179   14028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:51.518183   14028 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:51.518186   14028 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:51.518353   14028 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:51.518510   14028 out.go:298] Setting JSON to false
	I0320 17:10:51.518529   14028 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:51.518560   14028 notify.go:220] Checking for updates...
	I0320 17:10:51.518786   14028 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:51.518794   14028 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:51.519046   14028 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:51.519051   14028 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:51.519054   14028 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (75.927ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:10:53.284129   14030 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:53.284329   14030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:53.284333   14030 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:53.284336   14030 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:53.284503   14030 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:53.284679   14030 out.go:298] Setting JSON to false
	I0320 17:10:53.284694   14030 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:53.284733   14030 notify.go:220] Checking for updates...
	I0320 17:10:53.284973   14030 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:53.284981   14030 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:53.285224   14030 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:53.285229   14030 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:53.285232   14030 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (76.464625ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:10:54.969785   14032 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:54.969954   14032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:54.969958   14032 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:54.969961   14032 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:54.970118   14032 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:54.970273   14032 out.go:298] Setting JSON to false
	I0320 17:10:54.970288   14032 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:54.970318   14032 notify.go:220] Checking for updates...
	I0320 17:10:54.970499   14032 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:54.970507   14032 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:54.970792   14032 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:54.970797   14032 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:54.970799   14032 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (75.968125ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:10:57.219817   14034 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:10:57.220014   14034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:57.220018   14034 out.go:304] Setting ErrFile to fd 2...
	I0320 17:10:57.220020   14034 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:10:57.220174   14034 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:10:57.220330   14034 out.go:298] Setting JSON to false
	I0320 17:10:57.220347   14034 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:10:57.220391   14034 notify.go:220] Checking for updates...
	I0320 17:10:57.220585   14034 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:10:57.220592   14034 status.go:255] checking status of multinode-894000 ...
	I0320 17:10:57.220859   14034 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:10:57.220864   14034 status.go:343] host is not running, skipping remaining checks
	I0320 17:10:57.220867   14034 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (77.824208ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:11:01.367602   14036 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:01.367809   14036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:01.367814   14036 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:01.367817   14036 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:01.367998   14036 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:01.368190   14036 out.go:298] Setting JSON to false
	I0320 17:11:01.368206   14036 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:11:01.368235   14036 notify.go:220] Checking for updates...
	I0320 17:11:01.368456   14036 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:01.368466   14036 status.go:255] checking status of multinode-894000 ...
	I0320 17:11:01.368725   14036 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:11:01.368730   14036 status.go:343] host is not running, skipping remaining checks
	I0320 17:11:01.368733   14036 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (76.163833ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:11:05.244791   14040 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:05.244964   14040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:05.244968   14040 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:05.244971   14040 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:05.245137   14040 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:05.245321   14040 out.go:298] Setting JSON to false
	I0320 17:11:05.245338   14040 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:11:05.245371   14040 notify.go:220] Checking for updates...
	I0320 17:11:05.245597   14040 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:05.245605   14040 status.go:255] checking status of multinode-894000 ...
	I0320 17:11:05.245863   14040 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:11:05.245868   14040 status.go:343] host is not running, skipping remaining checks
	I0320 17:11:05.245871   14040 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (77.003375ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:11:18.296473   14042 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:18.296662   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:18.296666   14042 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:18.296669   14042 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:18.296827   14042 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:18.296985   14042 out.go:298] Setting JSON to false
	I0320 17:11:18.297001   14042 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:11:18.297031   14042 notify.go:220] Checking for updates...
	I0320 17:11:18.297248   14042 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:18.297257   14042 status.go:255] checking status of multinode-894000 ...
	I0320 17:11:18.297525   14042 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:11:18.297530   14042 status.go:343] host is not running, skipping remaining checks
	I0320 17:11:18.297533   14042 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr: exit status 7 (75.509625ms)

-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0320 17:11:38.400363   14044 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:38.400564   14044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:38.400569   14044 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:38.400571   14044 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:38.400740   14044 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:38.400897   14044 out.go:298] Setting JSON to false
	I0320 17:11:38.400923   14044 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:11:38.400972   14044 notify.go:220] Checking for updates...
	I0320 17:11:38.401175   14044 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:38.401184   14044 status.go:255] checking status of multinode-894000 ...
	I0320 17:11:38.401477   14044 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:11:38.401482   14044 status.go:343] host is not running, skipping remaining checks
	I0320 17:11:38.401485   14044 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
multinode_test.go:294: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-894000 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (34.135541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StartAfterStop (48.03s)
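
The timestamps in the repeated status checks above (17:10:50, :51, :53, :54, :57, then 17:11:01, :05, :18, :38) show the test re-running `minikube status` with growing waits before giving up after roughly 48 seconds. A schematic of that kind of backoff polling, as a hypothetical stand-in for the test's own retry helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(45 * time.Second)
	for delay := time.Second; time.Now().Before(deadline); delay *= 2 {
		// Exit status 0 means every component reported Running.
		if exec.Command("out/minikube-darwin-arm64", "-p", "multinode-894000", "status").Run() == nil {
			fmt.Println("cluster is up")
			return
		}
		time.Sleep(delay)
	}
	fmt.Println("gave up: host still stopped")
}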

TestMultiNode/serial/RestartKeepsNodes (8.35s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-894000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-arm64 stop -p multinode-894000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-arm64 stop -p multinode-894000: (2.959415708s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-894000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-894000 --wait=true -v=8 --alsologtostderr: exit status 80 (5.246310959s)

-- stdout --
	* [multinode-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-894000" primary control-plane node in "multinode-894000" cluster
	* Restarting existing qemu2 VM for "multinode-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:11:41.496612   14075 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:41.496778   14075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:41.496783   14075 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:41.496787   14075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:41.496978   14075 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:41.498418   14075 out.go:298] Setting JSON to false
	I0320 17:11:41.518650   14075 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7871,"bootTime":1710972030,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:11:41.518720   14075 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:11:41.523337   14075 out.go:177] * [multinode-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:11:41.530271   14075 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:11:41.530301   14075 notify.go:220] Checking for updates...
	I0320 17:11:41.537326   14075 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:11:41.544298   14075 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:11:41.551272   14075 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:11:41.558245   14075 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:11:41.564310   14075 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:11:41.567596   14075 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:41.567655   14075 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:11:41.572305   14075 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:11:41.579279   14075 start.go:297] selected driver: qemu2
	I0320 17:11:41.579286   14075 start.go:901] validating driver "qemu2" against &{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:11:41.579341   14075 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:11:41.581922   14075 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:11:41.581971   14075 cni.go:84] Creating CNI manager for ""
	I0320 17:11:41.581978   14075 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0320 17:11:41.582036   14075 start.go:340] cluster config:
	{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:11:41.587070   14075 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:11:41.595262   14075 out.go:177] * Starting "multinode-894000" primary control-plane node in "multinode-894000" cluster
	I0320 17:11:41.599301   14075 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:11:41.599331   14075 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:11:41.599341   14075 cache.go:56] Caching tarball of preloaded images
	I0320 17:11:41.599421   14075 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:11:41.599428   14075 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:11:41.599501   14075 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/multinode-894000/config.json ...
	I0320 17:11:41.600135   14075 start.go:360] acquireMachinesLock for multinode-894000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:11:41.600184   14075 start.go:364] duration metric: took 41.125µs to acquireMachinesLock for "multinode-894000"
	I0320 17:11:41.600197   14075 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:11:41.600207   14075 fix.go:54] fixHost starting: 
	I0320 17:11:41.600362   14075 fix.go:112] recreateIfNeeded on multinode-894000: state=Stopped err=<nil>
	W0320 17:11:41.600373   14075 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:11:41.608312   14075 out.go:177] * Restarting existing qemu2 VM for "multinode-894000" ...
	I0320 17:11:41.612424   14075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:08:36:51:4d:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:11:41.615347   14075 main.go:141] libmachine: STDOUT: 
	I0320 17:11:41.615376   14075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:11:41.615418   14075 fix.go:56] duration metric: took 15.212292ms for fixHost
	I0320 17:11:41.615426   14075 start.go:83] releasing machines lock for "multinode-894000", held for 15.235875ms
	W0320 17:11:41.615436   14075 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:11:41.615486   14075 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:11:41.615493   14075 start.go:728] Will try again in 5 seconds ...
	I0320 17:11:46.615846   14075 start.go:360] acquireMachinesLock for multinode-894000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:11:46.616211   14075 start.go:364] duration metric: took 264.5µs to acquireMachinesLock for "multinode-894000"
	I0320 17:11:46.616315   14075 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:11:46.616333   14075 fix.go:54] fixHost starting: 
	I0320 17:11:46.616965   14075 fix.go:112] recreateIfNeeded on multinode-894000: state=Stopped err=<nil>
	W0320 17:11:46.616990   14075 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:11:46.622326   14075 out.go:177] * Restarting existing qemu2 VM for "multinode-894000" ...
	I0320 17:11:46.625558   14075 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:08:36:51:4d:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:11:46.635725   14075 main.go:141] libmachine: STDOUT: 
	I0320 17:11:46.635794   14075 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:11:46.635861   14075 fix.go:56] duration metric: took 19.53ms for fixHost
	I0320 17:11:46.635882   14075 start.go:83] releasing machines lock for "multinode-894000", held for 19.645209ms
	W0320 17:11:46.636036   14075 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:11:46.643324   14075 out.go:177] 
	W0320 17:11:46.647310   14075 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:11:46.647386   14075 out.go:239] * 
	* 
	W0320 17:11:46.649776   14075 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:11:46.660333   14075 out.go:177] 
** /stderr **
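Every failure in this test reduces to the same root cause visible in the stderr above: socket_vmnet_client cannot connect to the unix socket at /var/run/socket_vmnet ("Connection refused"), so qemu never receives a network file descriptor; minikube retries once after 5 seconds and then exits with GUEST_PROVISION (exit status 80). A minimal Go probe that performs the same connect (illustrative only, not part of the test suite; the socket path is taken from the log):

	package main

	import (
		"fmt"
		"net"
		"os"
	)

	func main() {
		// Attempt the same unix-domain connect that socket_vmnet_client makes.
		// "connection refused" means no daemon is listening behind
		// /var/run/socket_vmnet, which is exactly the condition this run hit.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Fprintln(os.Stderr, "socket_vmnet unreachable:", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}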
multinode_test.go:328: failed to run minikube start. args "out/minikube-darwin-arm64 node list -p multinode-894000" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-894000
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (34.330917ms)
-- stdout --
	Stopped
-- /stdout --
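The bare "Stopped" above is what the --format={{.Host}} Go template renders from the status value that the log later prints at status.go:257 (&{Name:multinode-894000 Host:Stopped ...}). A sketch of that rendering, using a stand-in struct with the field names visible in the log rather than minikube's actual type:

	package main

	import (
		"os"
		"text/template"
	)

	// Stand-in for the status value logged at status.go:257; only the
	// field names that appear in this report are reproduced.
	type clusterStatus struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		st := clusterStatus{Name: "multinode-894000", Host: "Stopped",
			Kubelet: "Stopped", APIServer: "Stopped", Kubeconfig: "Stopped"}
		// The same template string passed on the command line.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		tmpl.Execute(os.Stdout, st) // prints "Stopped"
	}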
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (8.35s)
TestMultiNode/serial/DeleteNode (0.11s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 node delete m03
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 node delete m03: exit status 83 (44.077625ms)
-- stdout --
	* The control-plane node multinode-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-894000"
-- /stdout --
multinode_test.go:418: node delete returned an error. args "out/minikube-darwin-arm64 -p multinode-894000 node delete m03": exit status 83
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr: exit status 7 (32.2895ms)
-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0320 17:11:46.856993   14089 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:46.857142   14089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:46.857145   14089 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:46.857147   14089 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:46.857261   14089 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:46.857387   14089 out.go:298] Setting JSON to false
	I0320 17:11:46.857399   14089 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:11:46.857461   14089 notify.go:220] Checking for updates...
	I0320 17:11:46.857602   14089 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:46.857609   14089 status.go:255] checking status of multinode-894000 ...
	I0320 17:11:46.857816   14089 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:11:46.857819   14089 status.go:343] host is not running, skipping remaining checks
	I0320 17:11:46.857821   14089 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.139958ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/DeleteNode (0.11s)
TestMultiNode/serial/StopMultiNode (3.34s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-arm64 -p multinode-894000 stop: (3.199074875s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status: exit status 7 (69.588208ms)
-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr: exit status 7 (34.322875ms)
-- stdout --
	multinode-894000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
** stderr ** 
	I0320 17:11:50.192701   14113 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:50.192830   14113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:50.192833   14113 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:50.192835   14113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:50.192958   14113 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:50.193083   14113 out.go:298] Setting JSON to false
	I0320 17:11:50.193094   14113 mustload.go:65] Loading cluster: multinode-894000
	I0320 17:11:50.193144   14113 notify.go:220] Checking for updates...
	I0320 17:11:50.193293   14113 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:50.193300   14113 status.go:255] checking status of multinode-894000 ...
	I0320 17:11:50.193502   14113 status.go:330] multinode-894000 host status = "Stopped" (err=<nil>)
	I0320 17:11:50.193505   14113 status.go:343] host is not running, skipping remaining checks
	I0320 17:11:50.193507   14113 status.go:257] multinode-894000 status: &{Name:multinode-894000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr": multinode-894000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-darwin-arm64 -p multinode-894000 status --alsologtostderr": multinode-894000
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped
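The two assertions above (multinode_test.go:364 and :368) expect one "host: Stopped" and one "kubelet: Stopped" line per node; because the second node was never created in this run, only a single match is found. The check plausibly amounts to counting substrings in the status output, along these lines (illustrative; the real assertion lives in multinode_test.go and is not shown in this report):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Status output captured in the log: only the control-plane node reports.
		out := "multinode-894000\ntype: Control Plane\nhost: Stopped\nkubelet: Stopped\napiserver: Stopped\nkubeconfig: Stopped\n"
		const wantNodes = 2 // hypothetical expectation: control plane plus one worker
		if got := strings.Count(out, "host: Stopped"); got != wantNodes {
			fmt.Printf("incorrect number of stopped hosts: got %d, want %d\n", got, wantNodes)
		}
	}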
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.584ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (3.34s)
TestMultiNode/serial/RestartMultiNode (5.25s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-894000 --wait=true -v=8 --alsologtostderr --driver=qemu2 
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-894000 --wait=true -v=8 --alsologtostderr --driver=qemu2 : exit status 80 (5.182921541s)
-- stdout --
	* [multinode-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "multinode-894000" primary control-plane node in "multinode-894000" cluster
	* Restarting existing qemu2 VM for "multinode-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "multinode-894000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0320 17:11:50.256328   14117 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:11:50.256448   14117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:50.256451   14117 out.go:304] Setting ErrFile to fd 2...
	I0320 17:11:50.256461   14117 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:11:50.256593   14117 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:11:50.257583   14117 out.go:298] Setting JSON to false
	I0320 17:11:50.273752   14117 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7880,"bootTime":1710972030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:11:50.273812   14117 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:11:50.278154   14117 out.go:177] * [multinode-894000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:11:50.285983   14117 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:11:50.287439   14117 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:11:50.286027   14117 notify.go:220] Checking for updates...
	I0320 17:11:50.294945   14117 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:11:50.297918   14117 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:11:50.300965   14117 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:11:50.303911   14117 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:11:50.305713   14117 config.go:182] Loaded profile config "multinode-894000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:11:50.305982   14117 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:11:50.309911   14117 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:11:50.316758   14117 start.go:297] selected driver: qemu2
	I0320 17:11:50.316763   14117 start.go:901] validating driver "qemu2" against &{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:11:50.316812   14117 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:11:50.319032   14117 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:11:50.319061   14117 cni.go:84] Creating CNI manager for ""
	I0320 17:11:50.319065   14117 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0320 17:11:50.319114   14117 start.go:340] cluster config:
	{Name:multinode-894000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:multinode-894000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:11:50.323497   14117 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:11:50.331003   14117 out.go:177] * Starting "multinode-894000" primary control-plane node in "multinode-894000" cluster
	I0320 17:11:50.334935   14117 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:11:50.334952   14117 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:11:50.334963   14117 cache.go:56] Caching tarball of preloaded images
	I0320 17:11:50.335018   14117 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:11:50.335027   14117 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:11:50.335098   14117 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/multinode-894000/config.json ...
	I0320 17:11:50.335603   14117 start.go:360] acquireMachinesLock for multinode-894000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:11:50.335631   14117 start.go:364] duration metric: took 21.625µs to acquireMachinesLock for "multinode-894000"
	I0320 17:11:50.335641   14117 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:11:50.335651   14117 fix.go:54] fixHost starting: 
	I0320 17:11:50.335787   14117 fix.go:112] recreateIfNeeded on multinode-894000: state=Stopped err=<nil>
	W0320 17:11:50.335797   14117 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:11:50.342922   14117 out.go:177] * Restarting existing qemu2 VM for "multinode-894000" ...
	I0320 17:11:50.346942   14117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:08:36:51:4d:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:11:50.348974   14117 main.go:141] libmachine: STDOUT: 
	I0320 17:11:50.349007   14117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:11:50.349042   14117 fix.go:56] duration metric: took 13.395625ms for fixHost
	I0320 17:11:50.349046   14117 start.go:83] releasing machines lock for "multinode-894000", held for 13.411375ms
	W0320 17:11:50.349054   14117 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:11:50.349088   14117 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:11:50.349093   14117 start.go:728] Will try again in 5 seconds ...
	I0320 17:11:55.351081   14117 start.go:360] acquireMachinesLock for multinode-894000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:11:55.351434   14117 start.go:364] duration metric: took 272.542µs to acquireMachinesLock for "multinode-894000"
	I0320 17:11:55.351574   14117 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:11:55.351592   14117 fix.go:54] fixHost starting: 
	I0320 17:11:55.352260   14117 fix.go:112] recreateIfNeeded on multinode-894000: state=Stopped err=<nil>
	W0320 17:11:55.352285   14117 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:11:55.357717   14117 out.go:177] * Restarting existing qemu2 VM for "multinode-894000" ...
	I0320 17:11:55.363925   14117 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/qemu.pid -device virtio-net-pci,netdev=net0,mac=92:08:36:51:4d:31 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/multinode-894000/disk.qcow2
	I0320 17:11:55.373591   14117 main.go:141] libmachine: STDOUT: 
	I0320 17:11:55.373652   14117 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:11:55.373708   14117 fix.go:56] duration metric: took 22.117542ms for fixHost
	I0320 17:11:55.373725   14117 start.go:83] releasing machines lock for "multinode-894000", held for 22.270875ms
	W0320 17:11:55.373909   14117 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p multinode-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-894000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:11:55.380609   14117 out.go:177] 
	W0320 17:11:55.384703   14117 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:11:55.384742   14117 out.go:239] * 
	* 
	W0320 17:11:55.387766   14117 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:11:55.395680   14117 out.go:177] 
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-arm64 start -p multinode-894000 --wait=true -v=8 --alsologtostderr --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (69.106458ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/RestartMultiNode (5.25s)
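helpers_test.go treats the status command's exit status 7 as "may be ok" rather than a harness failure, so the post-mortem moves on to the log-retrieval decision instead of aborting. Reading that code out of a non-zero exit in Go looks roughly like this (a sketch under the assumption that the helper inspects *exec.ExitError; its actual implementation is not shown in this report):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-arm64", "status",
			"--format={{.Host}}", "-p", "multinode-894000")
		out, err := cmd.Output()
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// Exit status 7 reports a stopped cluster; treat it as informational.
			fmt.Printf("status exited %d (may be ok): %s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("failed to run status:", err)
			return
		}
		fmt.Printf("status: %s", out)
	}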
TestMultiNode/serial/ValidateNameConflict (20.63s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-arm64 node list -p multinode-894000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-894000-m01 --driver=qemu2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-894000-m01 --driver=qemu2 : exit status 80 (10.182160791s)
-- stdout --
	* [multinode-894000-m01] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-894000-m01" primary control-plane node in "multinode-894000-m01" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-894000-m01" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-894000-m01" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-arm64 start -p multinode-894000-m02 --driver=qemu2 
multinode_test.go:472: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p multinode-894000-m02 --driver=qemu2 : exit status 80 (10.193523833s)
-- stdout --
	* [multinode-894000-m02] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "multinode-894000-m02" primary control-plane node in "multinode-894000-m02" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "multinode-894000-m02" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p multinode-894000-m02" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:474: failed to start profile. args "out/minikube-darwin-arm64 start -p multinode-894000-m02 --driver=qemu2 " : exit status 80
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-arm64 node add -p multinode-894000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-arm64 node add -p multinode-894000: exit status 83 (81.477792ms)
-- stdout --
	* The control-plane node multinode-894000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p multinode-894000"
-- /stdout --
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-arm64 delete -p multinode-894000-m02
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p multinode-894000 -n multinode-894000: exit status 7 (32.314292ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-894000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/ValidateNameConflict (20.63s)
TestPreload (10.13s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-arm64 start -p test-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p test-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4: exit status 80 (9.944911708s)
-- stdout --
	* [test-preload-756000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "test-preload-756000" primary control-plane node in "test-preload-756000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "test-preload-756000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	
-- /stdout --
** stderr ** 
	I0320 17:12:16.293063   14172 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:12:16.293216   14172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:12:16.293219   14172 out.go:304] Setting ErrFile to fd 2...
	I0320 17:12:16.293221   14172 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:12:16.293349   14172 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:12:16.294453   14172 out.go:298] Setting JSON to false
	I0320 17:12:16.310876   14172 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7906,"bootTime":1710972030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:12:16.310935   14172 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:12:16.316830   14172 out.go:177] * [test-preload-756000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:12:16.323709   14172 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:12:16.328756   14172 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:12:16.323750   14172 notify.go:220] Checking for updates...
	I0320 17:12:16.336670   14172 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:12:16.340700   14172 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:12:16.343713   14172 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:12:16.346688   14172 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:12:16.350067   14172 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:12:16.350124   14172 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:12:16.354647   14172 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:12:16.361716   14172 start.go:297] selected driver: qemu2
	I0320 17:12:16.361723   14172 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:12:16.361730   14172 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:12:16.364002   14172 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:12:16.368620   14172 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:12:16.371751   14172 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:12:16.371796   14172 cni.go:84] Creating CNI manager for ""
	I0320 17:12:16.371804   14172 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:12:16.371808   14172 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:12:16.371843   14172 start.go:340] cluster config:
	{Name:test-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:12:16.376659   14172 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.384694   14172 out.go:177] * Starting "test-preload-756000" primary control-plane node in "test-preload-756000" cluster
	I0320 17:12:16.388672   14172 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime docker
	I0320 17:12:16.388742   14172 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/test-preload-756000/config.json ...
	I0320 17:12:16.388756   14172 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/test-preload-756000/config.json: {Name:mk1143d4906ff4a65c70504a3aa7dcf2950ca945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:12:16.388777   14172 cache.go:107] acquiring lock: {Name:mkc2be43114db7e9e585d9b5a82ee9b44e9e2cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.388800   14172 cache.go:107] acquiring lock: {Name:mk44cf3c18a8903d9950c354dfe8f2f5bc1df17e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.388806   14172 cache.go:107] acquiring lock: {Name:mkf4ad4a2217925b2125f4af70f2f27d6e637d93 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.388840   14172 cache.go:107] acquiring lock: {Name:mk913a718ab56cfed9eef59597f7f88602113961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.389014   14172 cache.go:107] acquiring lock: {Name:mkeeac54b7eddad24c7859c7cec2c3f6c271689f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.389021   14172 cache.go:107] acquiring lock: {Name:mk9d780ec8f8cbf9fa3681ca6df500f9e867b470 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.389046   14172 cache.go:107] acquiring lock: {Name:mkef9697c34e5734fcb4c4052f5881f80d0e954b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.389068   14172 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:12:16.389031   14172 cache.go:107] acquiring lock: {Name:mk4767077ba2f8ec7dbd8c225cdadbdd524ebc5f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:16.389055   14172 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0320 17:12:16.389063   14172 start.go:360] acquireMachinesLock for test-preload-756000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:12:16.389058   14172 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0320 17:12:16.389354   14172 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0320 17:12:16.389360   14172 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0320 17:12:16.389368   14172 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:12:16.389380   14172 start.go:364] duration metric: took 260.958µs to acquireMachinesLock for "test-preload-756000"
	I0320 17:12:16.389385   14172 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:12:16.389441   14172 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0320 17:12:16.389399   14172 start.go:93] Provisioning new machine with config: &{Name:test-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:12:16.389471   14172 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:12:16.397657   14172 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:12:16.401943   14172 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0320 17:12:16.404131   14172 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0320 17:12:16.404670   14172 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:12:16.409138   14172 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0320 17:12:16.409157   14172 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:12:16.409381   14172 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:12:16.409575   14172 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0320 17:12:16.409663   14172 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0320 17:12:16.416636   14172 start.go:159] libmachine.API.Create for "test-preload-756000" (driver="qemu2")
	I0320 17:12:16.416657   14172 client.go:168] LocalClient.Create starting
	I0320 17:12:16.416736   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:12:16.416765   14172 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:16.416775   14172 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:16.416826   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:12:16.416850   14172 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:16.416858   14172 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:16.417199   14172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:12:16.567590   14172 main.go:141] libmachine: Creating SSH key...
	I0320 17:12:16.695775   14172 main.go:141] libmachine: Creating Disk image...
	I0320 17:12:16.695798   14172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:12:16.696025   14172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2
	I0320 17:12:16.709314   14172 main.go:141] libmachine: STDOUT: 
	I0320 17:12:16.709336   14172 main.go:141] libmachine: STDERR: 
	I0320 17:12:16.709390   14172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2 +20000M
	I0320 17:12:16.721138   14172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:12:16.721157   14172 main.go:141] libmachine: STDERR: 
	I0320 17:12:16.721184   14172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2
	I0320 17:12:16.721188   14172 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:12:16.721217   14172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=2e:05:d6:6e:eb:fa -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2
	I0320 17:12:16.723147   14172 main.go:141] libmachine: STDOUT: 
	I0320 17:12:16.723178   14172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:12:16.723195   14172 client.go:171] duration metric: took 306.543916ms to LocalClient.Create
	I0320 17:12:18.373179   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4
	I0320 17:12:18.512719   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4
	W0320 17:12:18.513340   14172 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0320 17:12:18.513466   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0320 17:12:18.514824   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0320 17:12:18.525669   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4
	I0320 17:12:18.543126   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0320 17:12:18.546974   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0320 17:12:18.643615   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 exists
	I0320 17:12:18.643664   14172 cache.go:96] cache image "registry.k8s.io/pause:3.7" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7" took 2.254957959s
	I0320 17:12:18.643710   14172 cache.go:80] save to tar file registry.k8s.io/pause:3.7 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 succeeded
	I0320 17:12:18.723402   14172 start.go:128] duration metric: took 2.334000667s to createHost
	I0320 17:12:18.723453   14172 start.go:83] releasing machines lock for "test-preload-756000", held for 2.334154125s
	W0320 17:12:18.723514   14172 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:18.740588   14172 out.go:177] * Deleting "test-preload-756000" in qemu2 ...
	W0320 17:12:18.771741   14172 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:18.771775   14172 start.go:728] Will try again in 5 seconds ...
	W0320 17:12:18.860498   14172 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0320 17:12:18.860586   14172 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0320 17:12:20.392687   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 exists
	I0320 17:12:20.392716   14172 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.8.6" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6" took 4.003893584s
	I0320 17:12:20.392730   14172 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.8.6 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 succeeded
	I0320 17:12:20.724710   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 exists
	I0320 17:12:20.724786   14172 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.24.4" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4" took 4.335902375s
	I0320 17:12:20.724819   14172 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.24.4 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.4 succeeded
	I0320 17:12:20.761317   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0320 17:12:20.761354   14172 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 4.372747084s
	I0320 17:12:20.761375   14172 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0320 17:12:21.767604   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 exists
	I0320 17:12:21.767652   14172 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.24.4" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4" took 5.379017708s
	I0320 17:12:21.767676   14172 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.24.4 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.4 succeeded
	I0320 17:12:22.468195   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 exists
	I0320 17:12:22.468247   14172 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.24.4" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4" took 6.079497458s
	I0320 17:12:22.468269   14172 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.24.4 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.4 succeeded
	I0320 17:12:22.846361   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 exists
	I0320 17:12:22.846409   14172 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.24.4" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4" took 6.457878875s
	I0320 17:12:22.846433   14172 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.24.4 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.4 succeeded
	I0320 17:12:23.772001   14172 start.go:360] acquireMachinesLock for test-preload-756000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:12:23.772386   14172 start.go:364] duration metric: took 302.916µs to acquireMachinesLock for "test-preload-756000"
	I0320 17:12:23.772501   14172 start.go:93] Provisioning new machine with config: &{Name:test-preload-756000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-756000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:12:23.772726   14172 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:12:23.784360   14172 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:12:23.836631   14172 start.go:159] libmachine.API.Create for "test-preload-756000" (driver="qemu2")
	I0320 17:12:23.836681   14172 client.go:168] LocalClient.Create starting
	I0320 17:12:23.836819   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:12:23.836887   14172 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:23.836909   14172 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:23.837007   14172 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:12:23.837059   14172 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:23.837077   14172 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:23.837653   14172 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:12:23.991355   14172 main.go:141] libmachine: Creating SSH key...
	I0320 17:12:24.134020   14172 main.go:141] libmachine: Creating Disk image...
	I0320 17:12:24.134027   14172 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:12:24.134218   14172 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2
	I0320 17:12:24.147006   14172 main.go:141] libmachine: STDOUT: 
	I0320 17:12:24.147026   14172 main.go:141] libmachine: STDERR: 
	I0320 17:12:24.147081   14172 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2 +20000M
	I0320 17:12:24.158155   14172 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:12:24.158174   14172 main.go:141] libmachine: STDERR: 
	I0320 17:12:24.158191   14172 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2
	I0320 17:12:24.158195   14172 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:12:24.158242   14172 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/qemu.pid -device virtio-net-pci,netdev=net0,mac=66:cb:58:f0:8c:2c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/test-preload-756000/disk.qcow2
	I0320 17:12:24.160105   14172 main.go:141] libmachine: STDOUT: 
	I0320 17:12:24.160119   14172 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:12:24.160134   14172 client.go:171] duration metric: took 323.459791ms to LocalClient.Create
	I0320 17:12:25.781412   14172 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 exists
	I0320 17:12:25.781478   14172 cache.go:96] cache image "registry.k8s.io/etcd:3.5.3-0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0" took 9.392893917s
	I0320 17:12:25.781510   14172 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.3-0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 succeeded
	I0320 17:12:25.781568   14172 cache.go:87] Successfully saved all images to host disk.
	I0320 17:12:26.161219   14172 start.go:128] duration metric: took 2.388559583s to createHost
	I0320 17:12:26.161271   14172 start.go:83] releasing machines lock for "test-preload-756000", held for 2.388954958s
	W0320 17:12:26.161623   14172 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p test-preload-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p test-preload-756000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:26.171081   14172 out.go:177] 
	W0320 17:12:26.179104   14172 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:12:26.179134   14172 out.go:239] * 
	* 
	W0320 17:12:26.182032   14172 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:12:26.192053   14172 out.go:177] 

** /stderr **
preload_test.go:46: out/minikube-darwin-arm64 start -p test-preload-756000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.24.4 failed: exit status 80
panic.go:626: *** TestPreload FAILED at 2024-03-20 17:12:26.210378 -0700 PDT m=+732.434336710
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-756000 -n test-preload-756000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p test-preload-756000 -n test-preload-756000: exit status 7 (73.181042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "test-preload-756000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "test-preload-756000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p test-preload-756000
--- FAIL: TestPreload (10.13s)
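Note on the failure mode: every VM-creation failure in this report reduces to the same line recorded above, Failed to connect to "/var/run/socket_vmnet": Connection refused, meaning nothing was serving the socket_vmnet unix socket on the build host. A minimal diagnostic sketch for that host, assuming socket_vmnet is installed under /opt/socket_vmnet as the logs show; the launchd service label in the last command is a hypothetical example, not taken from this report:

	# Does the unix socket exist? (path matches SocketVMnetPath in the config dumps above)
	ls -l /var/run/socket_vmnet
	# Is the daemon process running at all?
	pgrep -fl socket_vmnet
	# If not, restart it; the service label below is hypothetical and depends on
	# how socket_vmnet was installed on the host.
	sudo launchctl kickstart -k system/io.github.lima-vm.socket_vmnet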

TestScheduledStopUnix (10.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-arm64 start -p scheduled-stop-868000 --memory=2048 --driver=qemu2 
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p scheduled-stop-868000 --memory=2048 --driver=qemu2 : exit status 80 (9.86395575s)

-- stdout --
	* [scheduled-stop-868000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-868000" primary control-plane node in "scheduled-stop-868000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-868000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-868000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 80

-- stdout --
	* [scheduled-stop-868000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "scheduled-stop-868000" primary control-plane node in "scheduled-stop-868000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "scheduled-stop-868000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p scheduled-stop-868000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-03-20 17:12:36.254237 -0700 PDT m=+742.478587210
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-868000 -n scheduled-stop-868000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p scheduled-stop-868000 -n scheduled-stop-868000: exit status 7 (68.940166ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "scheduled-stop-868000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "scheduled-stop-868000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p scheduled-stop-868000
--- FAIL: TestScheduledStopUnix (10.04s)
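Note: each failing "Creating qemu2 VM" step first builds the VM disk; the TestPreload trace above records the two qemu-img calls involved. A reduced sketch of that sequence, with the full machine paths shortened to placeholder file names:

	# Convert the raw boot disk to qcow2, then grow it by the requested 20000 MB.
	# disk.qcow2.raw and disk.qcow2 stand in for the machine paths in the log.
	qemu-img convert -f raw -O qcow2 disk.qcow2.raw disk.qcow2
	qemu-img resize disk.qcow2 +20000M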

TestSkaffold (17.41s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2954212753 version
skaffold_test.go:59: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/skaffold.exe2954212753 version: (1.0482185s)
skaffold_test.go:63: skaffold version: v2.10.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-arm64 start -p skaffold-359000 --memory=2600 --driver=qemu2 
skaffold_test.go:66: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p skaffold-359000 --memory=2600 --driver=qemu2 : exit status 80 (9.929122916s)

-- stdout --
	* [skaffold-359000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-359000" primary control-plane node in "skaffold-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
skaffold_test.go:68: starting minikube: exit status 80

-- stdout --
	* [skaffold-359000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "skaffold-359000" primary control-plane node in "skaffold-359000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "skaffold-359000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2600MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p skaffold-359000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestSkaffold FAILED at 2024-03-20 17:12:53.662577 -0700 PDT m=+759.887605876
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-359000 -n skaffold-359000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p skaffold-359000 -n skaffold-359000: exit status 7 (64.732542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "skaffold-359000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "skaffold-359000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p skaffold-359000
--- FAIL: TestSkaffold (17.41s)
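Note: once the disk exists, minikube launches QEMU through socket_vmnet_client, which connects to the unix socket and hands the connection to QEMU as file descriptor 3 for the -netdev socket,id=net0,fd=3 device; that connect is the step being refused in every failure above. A reduced form of the invocation recorded in the TestPreload trace, with paths shortened to placeholders and non-networking flags trimmed:

	# Fails with 'Failed to connect to "/var/run/socket_vmnet": Connection refused'
	# whenever the socket is not being served, exactly as in the logs above.
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet \
	  qemu-system-aarch64 -M virt,highmem=off -cpu host -accel hvf \
	  -m 2200 -smp 2 \
	  -device virtio-net-pci,netdev=net0,mac=2e:05:d6:6e:eb:fa \
	  -netdev socket,id=net0,fd=3 \
	  -daemonize disk.qcow2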

TestRunningBinaryUpgrade (661.56s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3994314301 start -p running-upgrade-120000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.3994314301 start -p running-upgrade-120000 --memory=2200 --vm-driver=qemu2 : (1m32.195519666s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-arm64 start -p running-upgrade-120000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:130: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p running-upgrade-120000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m51.333754375s)

-- stdout --
	* [running-upgrade-120000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "running-upgrade-120000" primary control-plane node in "running-upgrade-120000" cluster
	* Updating the running qemu2 "running-upgrade-120000" VM ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0320 17:14:52.222384   14536 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:14:52.222524   14536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:14:52.222530   14536 out.go:304] Setting ErrFile to fd 2...
	I0320 17:14:52.222533   14536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:14:52.222666   14536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:14:52.223804   14536 out.go:298] Setting JSON to false
	I0320 17:14:52.241155   14536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8062,"bootTime":1710972030,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:14:52.241216   14536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:14:52.245617   14536 out.go:177] * [running-upgrade-120000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:14:52.252680   14536 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:14:52.256619   14536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:14:52.252728   14536 notify.go:220] Checking for updates...
	I0320 17:14:52.264724   14536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:14:52.268605   14536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:14:52.271734   14536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:14:52.274741   14536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:14:52.278006   14536 config.go:182] Loaded profile config "running-upgrade-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:14:52.281686   14536 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0320 17:14:52.284731   14536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:14:52.287661   14536 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:14:52.294679   14536 start.go:297] selected driver: qemu2
	I0320 17:14:52.294684   14536 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:14:52.294730   14536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:14:52.296936   14536 cni.go:84] Creating CNI manager for ""
	I0320 17:14:52.296952   14536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:14:52.296976   14536 start.go:340] cluster config:
	{Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:14:52.297025   14536 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:14:52.305713   14536 out.go:177] * Starting "running-upgrade-120000" primary control-plane node in "running-upgrade-120000" cluster
	I0320 17:14:52.309679   14536 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:14:52.309693   14536 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0320 17:14:52.309697   14536 cache.go:56] Caching tarball of preloaded images
	I0320 17:14:52.309735   14536 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:14:52.309739   14536 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0320 17:14:52.309790   14536 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/config.json ...
	I0320 17:14:52.310124   14536 start.go:360] acquireMachinesLock for running-upgrade-120000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:15:03.655745   14536 start.go:364] duration metric: took 11.346047209s to acquireMachinesLock for "running-upgrade-120000"
	I0320 17:15:03.655775   14536 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:15:03.655781   14536 fix.go:54] fixHost starting: 
	I0320 17:15:03.656732   14536 fix.go:112] recreateIfNeeded on running-upgrade-120000: state=Running err=<nil>
	W0320 17:15:03.656740   14536 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:15:03.661226   14536 out.go:177] * Updating the running qemu2 "running-upgrade-120000" VM ...
	I0320 17:15:03.669197   14536 machine.go:94] provisionDockerMachine start ...
	I0320 17:15:03.669242   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.669352   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:03.669357   14536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0320 17:15:03.725734   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-120000
	
	I0320 17:15:03.725750   14536 buildroot.go:166] provisioning hostname "running-upgrade-120000"
	I0320 17:15:03.725801   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.725950   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:03.725956   14536 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-120000 && echo "running-upgrade-120000" | sudo tee /etc/hostname
	I0320 17:15:03.784055   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-120000
	
	I0320 17:15:03.784097   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.784213   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:03.784222   14536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-120000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-120000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-120000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0320 17:15:03.848951   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0320 17:15:03.848964   14536 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17713-12248/.minikube CaCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17713-12248/.minikube}
	I0320 17:15:03.848972   14536 buildroot.go:174] setting up certificates
	I0320 17:15:03.848977   14536 provision.go:84] configureAuth start
	I0320 17:15:03.848986   14536 provision.go:143] copyHostCerts
	I0320 17:15:03.849063   14536 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem, removing ...
	I0320 17:15:03.849071   14536 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem
	I0320 17:15:03.849183   14536 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem (1082 bytes)
	I0320 17:15:03.849377   14536 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem, removing ...
	I0320 17:15:03.849381   14536 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem
	I0320 17:15:03.849430   14536 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem (1123 bytes)
	I0320 17:15:03.849537   14536 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem, removing ...
	I0320 17:15:03.849540   14536 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem
	I0320 17:15:03.849576   14536 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem (1679 bytes)
	I0320 17:15:03.849670   14536 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-120000 san=[127.0.0.1 localhost minikube running-upgrade-120000]
	I0320 17:15:04.006968   14536 provision.go:177] copyRemoteCerts
	I0320 17:15:04.007016   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0320 17:15:04.007025   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:15:04.038085   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0320 17:15:04.044415   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0320 17:15:04.052048   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0320 17:15:04.060029   14536 provision.go:87] duration metric: took 211.053208ms to configureAuth
	I0320 17:15:04.060042   14536 buildroot.go:189] setting minikube options for container-runtime
	I0320 17:15:04.060145   14536 config.go:182] Loaded profile config "running-upgrade-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:15:04.060188   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.060279   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.060284   14536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0320 17:15:04.117885   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0320 17:15:04.117897   14536 buildroot.go:70] root file system type: tmpfs
	I0320 17:15:04.117948   14536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0320 17:15:04.118004   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.118118   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.118150   14536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0320 17:15:04.178915   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0320 17:15:04.178969   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.179891   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.179905   14536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0320 17:15:04.240108   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0320 17:15:04.240120   14536 machine.go:97] duration metric: took 570.938875ms to provisionDockerMachine
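
The unit written above is applied idempotently: the rendered file goes to docker.service.new, and only if `diff -u` against the live unit reports a change is it moved into place, followed by daemon-reload, enable, and restart (the `<nil>` SSH error above means the command chain succeeded). The paired ExecStart= lines are the standard systemd idiom the unit's own comments describe: an empty ExecStart= clears the inherited command before the real one is set. Below is a hedged sketch of rendering such a unit with text/template; the template body and field names are illustrative, not minikube's actual provisioner template.

    package main

    import (
        "os"
        "text/template"
    )

    // Illustrative template only; the real unit is rendered by minikube's
    // buildroot provisioner with its own fields.
    var unit = template.Must(template.New("docker.service").Parse(`[Service]
    Type=notify
    Restart=on-failure
    # Empty ExecStart= first: clear the inherited command, then set ours.
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:{{.Port}} -H unix:///var/run/docker.sock --label provider={{.Driver}}
    `))

    func main() {
        if err := unit.Execute(os.Stdout, struct {
            Port   int
            Driver string
        }{2376, "qemu2"}); err != nil {
            panic(err)
        }
    }
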
	I0320 17:15:04.240126   14536 start.go:293] postStartSetup for "running-upgrade-120000" (driver="qemu2")
	I0320 17:15:04.240132   14536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0320 17:15:04.240189   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0320 17:15:04.240198   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:15:04.275900   14536 ssh_runner.go:195] Run: cat /etc/os-release
	I0320 17:15:04.277415   14536 info.go:137] Remote host: Buildroot 2021.02.12
	I0320 17:15:04.277422   14536 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/addons for local assets ...
	I0320 17:15:04.277490   14536 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/files for local assets ...
	I0320 17:15:04.277581   14536 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem -> 126652.pem in /etc/ssl/certs
	I0320 17:15:04.277670   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0320 17:15:04.280349   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:04.286757   14536 start.go:296] duration metric: took 46.628792ms for postStartSetup
	I0320 17:15:04.286771   14536 fix.go:56] duration metric: took 631.01725ms for fixHost
	I0320 17:15:04.286807   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.286903   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.286907   14536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0320 17:15:04.341548   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710980104.436124545
	
	I0320 17:15:04.341555   14536 fix.go:216] guest clock: 1710980104.436124545
	I0320 17:15:04.341559   14536 fix.go:229] Guest: 2024-03-20 17:15:04.436124545 -0700 PDT Remote: 2024-03-20 17:15:04.286775 -0700 PDT m=+12.088271293 (delta=149.349545ms)
	I0320 17:15:04.341570   14536 fix.go:200] guest clock delta is within tolerance: 149.349545ms
	I0320 17:15:04.341573   14536 start.go:83] releasing machines lock for "running-upgrade-120000", held for 685.844917ms
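
fixHost ends with a guest-clock sanity check: `date +%s.%N` is run in the guest and compared with the host clock, and the 149.35ms delta logged above is inside the allowed skew, so no resync happens. A minimal sketch of that comparison, reusing the timestamps from the log; the 2-second tolerance is an assumed illustration, not minikube's exact threshold.

    package main

    import (
        "fmt"
        "time"
    )

    // tolerance is assumed for illustration; the log only shows that a
    // ~149ms delta is acceptable.
    const tolerance = 2 * time.Second

    func clockOK(guest, host time.Time) bool {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta <= tolerance
    }

    func main() {
        guest := time.Unix(1710980104, 436124545) // guest clock from the log
        host := guest.Add(-149349545 * time.Nanosecond)
        fmt.Println(clockOK(guest, host)) // true
    }
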
	I0320 17:15:04.341629   14536 ssh_runner.go:195] Run: cat /version.json
	I0320 17:15:04.341640   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:15:04.341629   14536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0320 17:15:04.341713   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	W0320 17:15:04.342173   14536 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:52506->127.0.0.1:52309: write: broken pipe
	I0320 17:15:04.342185   14536 retry.go:31] will retry after 186.622111ms: ssh: handshake failed: write tcp 127.0.0.1:52506->127.0.0.1:52309: write: broken pipe
	W0320 17:15:04.368822   14536 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0320 17:15:04.368875   14536 ssh_runner.go:195] Run: systemctl --version
	I0320 17:15:04.370606   14536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0320 17:15:04.372304   14536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0320 17:15:04.372335   14536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0320 17:15:04.375218   14536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0320 17:15:04.379531   14536 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
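
The find/sed pipelines above normalize every bridge and podman CNI config under /etc/cni/net.d: IPv6 dst/subnet entries are deleted and each IPv4 subnet is forced to the pod CIDR 10.244.0.0/16 (gateway 10.244.0.1 for podman), which is how 87-podman-bridge.conflist ends up "configured". A hedged Go equivalent that edits the conflist structurally rather than textually; forcePodCIDR is an illustrative helper, not a minikube function.

    package main

    import (
        "encoding/json"
        "os"
    )

    // forcePodCIDR rewrites the ipam subnet of every plugin in a CNI
    // conflist, mirroring what the sed expressions above do textually.
    func forcePodCIDR(path, cidr string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var conf map[string]any
        if err := json.Unmarshal(data, &conf); err != nil {
            return err
        }
        plugins, _ := conf["plugins"].([]any)
        for _, p := range plugins {
            plugin, _ := p.(map[string]any)
            if ipam, ok := plugin["ipam"].(map[string]any); ok {
                ipam["subnet"] = cidr
            }
        }
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            return err
        }
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        if err := forcePodCIDR("/etc/cni/net.d/87-podman-bridge.conflist", "10.244.0.0/16"); err != nil {
            panic(err)
        }
    }
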
	I0320 17:15:04.379540   14536 start.go:494] detecting cgroup driver to use...
	I0320 17:15:04.379605   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:04.384417   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0320 17:15:04.388440   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0320 17:15:04.395922   14536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0320 17:15:04.395982   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0320 17:15:04.400371   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:04.404516   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0320 17:15:04.407658   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:04.410726   14536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0320 17:15:04.413713   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0320 17:15:04.416932   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0320 17:15:04.420736   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0320 17:15:04.424106   14536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0320 17:15:04.426808   14536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0320 17:15:04.429512   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:04.540961   14536 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0320 17:15:04.553724   14536 start.go:494] detecting cgroup driver to use...
	I0320 17:15:04.553803   14536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0320 17:15:04.569012   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:04.616107   14536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0320 17:15:04.643724   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:04.648609   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0320 17:15:04.652883   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:04.658594   14536 ssh_runner.go:195] Run: which cri-dockerd
	I0320 17:15:04.659912   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0320 17:15:04.662856   14536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
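
"scp memory -->" lines denote content that was generated in-memory on the host and streamed over the existing SSH session rather than copied from a file; here it is the 189-byte cri-docker drop-in wiring up the CNI bridge config. A sketch of that pattern using golang.org/x/crypto/ssh; writeRemote is illustrative, and minikube's real transfer path is its ssh_runner/scp implementation.

    package sketch

    import (
        "bytes"

        "golang.org/x/crypto/ssh"
    )

    // writeRemote streams an in-memory buffer into `sudo tee <dst>` over an
    // existing SSH connection, the same effect as the "scp memory" lines.
    func writeRemote(client *ssh.Client, dst string, data []byte) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run("sudo tee " + dst + " >/dev/null")
    }
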
	I0320 17:15:04.667851   14536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0320 17:15:04.771332   14536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0320 17:15:04.877377   14536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0320 17:15:04.877440   14536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0320 17:15:04.885679   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:04.986936   14536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:21.509916   14536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.523606958s)
	I0320 17:15:21.510005   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0320 17:15:21.515319   14536 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0320 17:15:21.528245   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:21.533183   14536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0320 17:15:21.607529   14536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0320 17:15:21.698820   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:21.788336   14536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0320 17:15:21.794901   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:21.799119   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:21.890928   14536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0320 17:15:21.930237   14536 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0320 17:15:21.930309   14536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0320 17:15:21.933202   14536 start.go:562] Will wait 60s for crictl version
	I0320 17:15:21.933251   14536 ssh_runner.go:195] Run: which crictl
	I0320 17:15:21.934587   14536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0320 17:15:21.945981   14536 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0320 17:15:21.946051   14536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:21.959240   14536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:21.976857   14536 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0320 17:15:21.976984   14536 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0320 17:15:21.978479   14536 kubeadm.go:877] updating cluster {Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0320 17:15:21.978524   14536 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:15:21.978563   14536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:21.988743   14536 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:21.988755   14536 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:21.988801   14536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:21.992184   14536 ssh_runner.go:195] Run: which lz4
	I0320 17:15:21.993431   14536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0320 17:15:21.994692   14536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0320 17:15:21.994704   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0320 17:15:22.719991   14536 docker.go:649] duration metric: took 726.617125ms to copy over tarball
	I0320 17:15:22.720051   14536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0320 17:15:24.088452   14536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3684405s)
	I0320 17:15:24.088467   14536 ssh_runner.go:146] rm: /preloaded.tar.lz4
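
This is the preload mechanism: the stat probe fails, so the ~360 MB preloaded-images tarball (359514331 bytes) is copied into the guest and unpacked directly into /var, seeding /var/lib/docker before the daemon restart that follows. The extract step as a Go exec sketch, with the flags copied from the log line above:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Unpack the preload straight into /var so /var/lib/docker comes up
        // pre-seeded with images; flags as logged above.
        cmd := exec.Command("sudo", "tar", "--xattrs",
            "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            panic(fmt.Sprintf("%v\n%s", err, out))
        }
    }
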
	I0320 17:15:24.103948   14536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:24.106839   14536 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0320 17:15:24.112071   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:24.196364   14536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:25.405175   14536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.208841708s)
	I0320 17:15:25.405262   14536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:25.418527   14536 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:25.418537   14536 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:25.418542   14536 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
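
The root of this block is the registry rename: the preload for v1.24.1 ships images tagged k8s.gcr.io/* (see the `docker images` output above), while this minikube build expects registry.k8s.io/* tags, so the expected images count as not preloaded and each one is checked against the on-host cache. A sketch of that per-image decision; needsTransfer is illustrative (the real logic is in cache_images.go), and the hash is the pause:3.7 ID from the lines that follow.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // needsTransfer reports whether the runtime lacks the image at the ID
    // the cache expects, the condition logged as "needs transfer".
    func needsTransfer(image, cachedID string) bool {
        out, err := exec.Command("docker", "image", "inspect",
            "--format", "{{.Id}}", image).Output()
        if err != nil {
            return true // image absent from the runtime entirely
        }
        return strings.TrimSpace(string(out)) != cachedID
    }

    func main() {
        fmt.Println(needsTransfer("registry.k8s.io/pause:3.7",
            "sha256:e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550"))
    }
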
	I0320 17:15:25.424645   14536 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:25.424672   14536 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:25.424738   14536 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:25.424819   14536 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:25.424837   14536 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0320 17:15:25.424875   14536 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:25.424930   14536 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:25.425160   14536 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:25.433183   14536 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:25.433224   14536 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:25.433334   14536 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0320 17:15:25.433365   14536 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:25.433571   14536 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:25.433590   14536 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:25.433755   14536 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:25.434129   14536 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.480453   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0320 17:15:27.497979   14536 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0320 17:15:27.498013   14536 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0320 17:15:27.498077   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0320 17:15:27.510980   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0320 17:15:27.511086   14536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0320 17:15:27.512897   14536 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0320 17:15:27.512907   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0320 17:15:27.521739   14536 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0320 17:15:27.521748   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0320 17:15:27.543100   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:27.559561   14536 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0320 17:15:27.559657   14536 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0320 17:15:27.559673   14536 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:27.559725   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:27.570133   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0320 17:15:27.573970   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:27.585016   14536 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0320 17:15:27.585039   14536 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:27.585090   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:27.595328   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0320 17:15:27.603548   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:27.613739   14536 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0320 17:15:27.613757   14536 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:27.613807   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:27.622484   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.623695   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0320 17:15:27.630781   14536 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:27.630901   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:27.630955   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:27.635107   14536 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0320 17:15:27.635126   14536 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.635169   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.656713   14536 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0320 17:15:27.656733   14536 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:27.656735   14536 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0320 17:15:27.656745   14536 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:27.656787   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:27.656788   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:27.658048   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0320 17:15:27.671418   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0320 17:15:27.671426   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0320 17:15:27.671511   14536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:27.671512   14536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:27.673246   14536 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0320 17:15:27.673264   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0320 17:15:27.673285   14536 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0320 17:15:27.673295   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0320 17:15:27.743169   14536 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:27.743185   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0320 17:15:27.845737   14536 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0320 17:15:27.921809   14536 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:27.921823   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0320 17:15:28.055456   14536 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0320 17:15:28.145932   14536 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:28.146072   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:28.160557   14536 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0320 17:15:28.160593   14536 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:28.160652   14536 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:28.172434   14536 cache_images.go:92] duration metric: took 2.753990958s to LoadCachedImages
	W0320 17:15:28.172479   14536 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0320 17:15:28.172484   14536 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0320 17:15:28.172526   14536 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-120000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0320 17:15:28.172586   14536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0320 17:15:28.192256   14536 cni.go:84] Creating CNI manager for ""
	I0320 17:15:28.192271   14536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:15:28.192276   14536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0320 17:15:28.192284   14536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-120000 NodeName:running-upgrade-120000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0320 17:15:28.192345   14536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-120000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
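
The kubeadm config printed above is four YAML documents in one file, separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A decoder loop like the following sketch (assuming gopkg.in/yaml.v3) is enough to split them back apart and read each document's kind:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all four documents are read
            }
            fmt.Println(doc.Kind, doc.APIVersion)
        }
    }
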
	
	I0320 17:15:28.192408   14536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0320 17:15:28.195832   14536 binaries.go:44] Found k8s binaries, skipping transfer
	I0320 17:15:28.195861   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0320 17:15:28.198875   14536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0320 17:15:28.204287   14536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0320 17:15:28.209081   14536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0320 17:15:28.213994   14536 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0320 17:15:28.215144   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:28.294418   14536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:15:28.299322   14536 certs.go:68] Setting up /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000 for IP: 10.0.2.15
	I0320 17:15:28.299328   14536 certs.go:194] generating shared ca certs ...
	I0320 17:15:28.299336   14536 certs.go:226] acquiring lock for ca certs: {Name:mkd2fbb5d6b83ab66860b229253f0a2f91e28e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.299465   14536 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key
	I0320 17:15:28.300169   14536 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key
	I0320 17:15:28.300179   14536 certs.go:256] generating profile certs ...
	I0320 17:15:28.300394   14536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.key
	I0320 17:15:28.300414   14536 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659
	I0320 17:15:28.300426   14536 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0320 17:15:28.369805   14536 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659 ...
	I0320 17:15:28.369814   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659: {Name:mkeb8e266eef5a9d37ac873d9939314a9ef807ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.370074   14536 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659 ...
	I0320 17:15:28.370079   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659: {Name:mk6cd5a274a6333bd53e1a8bc233b17cafd71792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.370223   14536 certs.go:381] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt
	I0320 17:15:28.370355   14536 certs.go:385] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key
	I0320 17:15:28.370611   14536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/proxy-client.key
	I0320 17:15:28.370794   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem (1338 bytes)
	W0320 17:15:28.370950   14536 certs.go:480] ignoring /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665_empty.pem, impossibly tiny 0 bytes
	I0320 17:15:28.370955   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem (1675 bytes)
	I0320 17:15:28.371015   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem (1082 bytes)
	I0320 17:15:28.371068   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem (1123 bytes)
	I0320 17:15:28.371120   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem (1679 bytes)
	I0320 17:15:28.371185   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:28.372671   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0320 17:15:28.380703   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0320 17:15:28.387579   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0320 17:15:28.394645   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0320 17:15:28.401304   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0320 17:15:28.407634   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0320 17:15:28.414266   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0320 17:15:28.421071   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0320 17:15:28.427789   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0320 17:15:28.434961   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem --> /usr/share/ca-certificates/12665.pem (1338 bytes)
	I0320 17:15:28.441409   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /usr/share/ca-certificates/126652.pem (1708 bytes)
	I0320 17:15:28.447898   14536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0320 17:15:28.452938   14536 ssh_runner.go:195] Run: openssl version
	I0320 17:15:28.454724   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0320 17:15:28.457750   14536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:28.459168   14536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 21 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:28.459187   14536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:28.461008   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0320 17:15:28.463806   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12665.pem && ln -fs /usr/share/ca-certificates/12665.pem /etc/ssl/certs/12665.pem"
	I0320 17:15:28.467043   14536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12665.pem
	I0320 17:15:28.468620   14536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 21 00:02 /usr/share/ca-certificates/12665.pem
	I0320 17:15:28.468641   14536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12665.pem
	I0320 17:15:28.470663   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12665.pem /etc/ssl/certs/51391683.0"
	I0320 17:15:28.473492   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126652.pem && ln -fs /usr/share/ca-certificates/126652.pem /etc/ssl/certs/126652.pem"
	I0320 17:15:28.476496   14536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126652.pem
	I0320 17:15:28.478082   14536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 21 00:02 /usr/share/ca-certificates/126652.pem
	I0320 17:15:28.478101   14536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126652.pem
	I0320 17:15:28.479763   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126652.pem /etc/ssl/certs/3ec20f2e.0"
	I0320 17:15:28.482783   14536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0320 17:15:28.484217   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0320 17:15:28.485986   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0320 17:15:28.487604   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0320 17:15:28.489517   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0320 17:15:28.492030   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0320 17:15:28.493801   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
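
Each `openssl x509 ... -checkend 86400` run asks whether that certificate expires within the next 24 hours; a non-zero exit is what would push minikube to regenerate the cert. The same check in Go's crypto/x509, as a sketch (minikube itself shells out to openssl, as logged):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresSoon mirrors `openssl x509 -checkend 86400`: true when the
    // cert's NotAfter falls within the next 24 hours.
    func expiresSoon(path string) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(24 * time.Hour).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        fmt.Println(soon, err)
    }
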
	I0320 17:15:28.495672   14536 kubeadm.go:391] StartCluster: {Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:15:28.495742   14536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:28.506717   14536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0320 17:15:28.509729   14536 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0320 17:15:28.509736   14536 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0320 17:15:28.509738   14536 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0320 17:15:28.509763   14536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0320 17:15:28.512977   14536 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.513424   14536 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-120000" does not appear in /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:15:28.513532   14536 kubeconfig.go:62] /Users/jenkins/minikube-integration/17713-12248/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-120000" cluster setting kubeconfig missing "running-upgrade-120000" context setting]
	I0320 17:15:28.513728   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.514323   14536 kapi.go:59] client config for running-upgrade-120000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10427ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0320 17:15:28.514767   14536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0320 17:15:28.517681   14536 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-120000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
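
Drift detection is nothing more than diffing the kubeadm.yaml written by the old minikube against the freshly rendered one: here the CRI socket gains its unix:// scheme and the kubelet moves from the systemd to the cgroupfs cgroup driver, so the non-empty diff forces a reconfigure instead of a plain restart. A sketch of the check, with the paths from the log (diff exits non-zero when the files differ, so the error is expected):

    package main

    import (
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        // Any non-empty output means the on-disk config drifted from the
        // newly rendered one.
        out, _ := exec.Command("sudo", "diff", "-u",
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        fmt.Println("drifted:", len(bytes.TrimSpace(out)) > 0)
    }
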
	I0320 17:15:28.517686   14536 kubeadm.go:1154] stopping kube-system containers ...
	I0320 17:15:28.517727   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:28.530517   14536 docker.go:483] Stopping containers: [4547de6e8257 a3b0979ad580 c543a8961041 ebe1641c4b92 e78ca58831f2 4e99a40742ea 42b87e69a0de 21c908c7689b 1077c3ad79cc b64b6f297233 9fa8564030f4 4c9bf4f7f50b c7d7f3fe21ac 5bcf20efa75f c6cd4fa42b8a e9c88a954eeb 8322a1713029 85bde627ac6c a7bf1291ce82 b95a6d9d3140 24ffbf9ce89f a75f28371b8a 6f17617dea23]
	I0320 17:15:28.530589   14536 ssh_runner.go:195] Run: docker stop 4547de6e8257 a3b0979ad580 c543a8961041 ebe1641c4b92 e78ca58831f2 4e99a40742ea 42b87e69a0de 21c908c7689b 1077c3ad79cc b64b6f297233 9fa8564030f4 4c9bf4f7f50b c7d7f3fe21ac 5bcf20efa75f c6cd4fa42b8a e9c88a954eeb 8322a1713029 85bde627ac6c a7bf1291ce82 b95a6d9d3140 24ffbf9ce89f a75f28371b8a 6f17617dea23
	I0320 17:15:28.542691   14536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0320 17:15:28.627750   14536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:15:28.631555   14536 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 21 00:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 21 00:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 21 00:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 21 00:14 /etc/kubernetes/scheduler.conf
	
	I0320 17:15:28.631582   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf
	I0320 17:15:28.634763   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.634801   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:15:28.637485   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf
	I0320 17:15:28.639951   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.639977   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:15:28.642800   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf
	I0320 17:15:28.645454   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.645483   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:15:28.648004   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf
	I0320 17:15:28.650937   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.650956   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
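All four greps above exit 1: none of the existing kubeconfigs reference the expected control-plane endpoint, so each file is removed and left for kubeadm to regenerate. The log does this with grep/rm over SSH; the same logic in plain Go, as a sketch:

    // Sketch: drop any kubeconfig that lacks the expected endpoint so
    // `kubeadm init phase kubeconfig` rewrites it below.
    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:52377"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			fmt.Println("removing stale kubeconfig:", f)
    			_ = os.Remove(f) // regenerated by the kubeconfig phase
    		}
    	}
    }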
	I0320 17:15:28.653585   14536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:15:28.656260   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:28.686818   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:29.075616   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:29.327829   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:29.355684   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
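The upgrade path does not rerun a full kubeadm init; it replays the individual phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd) against the refreshed kubeadm.yaml, with PATH pinned to the versioned binary directory. Sketched as a loop, with error handling simplified:

    // Sketch of the five `kubeadm init phase` invocations logged above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	phases := [][]string{
    		{"certs", "all"},
    		{"kubeconfig", "all"},
    		{"kubelet-start"},
    		{"control-plane", "all"},
    		{"etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(append([]string{"init", "phase"}, p...),
    			"--config", "/var/tmp/minikube/kubeadm.yaml")
    		cmd := exec.Command("kubeadm", args...)
    		cmd.Env = append(os.Environ(),
    			"PATH=/var/lib/minikube/binaries/v1.24.1:"+os.Getenv("PATH"))
    		if out, err := cmd.CombinedOutput(); err != nil {
    			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
    			os.Exit(1)
    		}
    	}
    }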
	I0320 17:15:29.381847   14536 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:15:29.381941   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:29.883961   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:30.383978   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:30.883979   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:31.383895   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:31.883888   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:31.888141   14536 api_server.go:72] duration metric: took 2.5063855s to wait for apiserver process to appear ...
	I0320 17:15:31.888153   14536 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:15:31.888170   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:36.890041   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:36.890070   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:41.890300   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:41.890322   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:46.890462   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:46.890510   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:51.891106   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:51.891160   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:56.891567   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:56.891613   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:01.892283   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:01.892340   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:06.893766   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:06.893834   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:11.895435   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:11.895474   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:16.897479   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:16.897523   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:21.899004   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:21.899075   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:26.901441   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:26.901519   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:31.903849   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
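From here the failure mode is uniform: every GET to /healthz dies on what looks like a five-second per-request client timeout ("context deadline exceeded ... while awaiting headers"), meaning the apiserver never sends response headers at all after the phase replay. A minimal sketch of such a wait loop; the overall deadline and the TLS shortcut are assumptions for the sketch, not minikube's exact code, which trusts the cluster CA from the rest.Config shown earlier:

    // Sketch: poll /healthz with a short per-request timeout until an
    // overall deadline, then give up and fall back to log collection.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second, // per-request; matches the 5s gaps above
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch shortcut
    		},
    	}
    	deadline := time.Now().Add(4 * time.Minute) // assumed overall budget
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err == nil && resp.StatusCode == http.StatusOK {
    			resp.Body.Close()
    			fmt.Println("apiserver healthy")
    			return
    		}
    		if err != nil {
    			fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded
    		} else {
    			resp.Body.Close()
    		}
    	}
    	fmt.Println("apiserver never became healthy; gathering logs")
    }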
	I0320 17:16:31.904065   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:31.929158   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:31.929299   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:31.947686   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:31.947770   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:31.960842   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:31.960930   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:31.972915   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:31.972985   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:31.982999   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:31.983058   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:31.993563   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:31.993645   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:32.003919   14536 logs.go:276] 0 containers: []
	W0320 17:16:32.003933   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:32.003990   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:32.014954   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:32.014972   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:32.014978   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:32.027383   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:32.027396   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:32.126799   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:32.126813   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:32.166669   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:32.166680   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:32.178724   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:32.178737   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:32.190737   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:32.190754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:32.209038   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:32.209047   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:32.224606   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:32.224880   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:32.237441   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:32.237453   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:32.257938   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:32.257949   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:32.269307   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:32.269318   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:32.282057   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:32.282069   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:32.293706   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:32.293718   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:32.305304   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:32.305314   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:32.317439   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:32.317452   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:32.322460   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:32.322466   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:32.336748   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:32.336761   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:32.350603   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:32.350616   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:32.390754   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:32.390765   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
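When a probe window expires, minikube pivots to evidence collection: each control-plane component is located with a k8s_<name> container-name filter (two IDs each here, the pre-restart and post-restart instances), the last 400 lines of each container's logs are tailed, and journalctl (kubelet, docker/cri-docker), dmesg, describe nodes, and container status are captured alongside. The same cycle then repeats after every failed healthz retry below. A sketch of the per-component fan-out:

    // Sketch: locate each component's containers by name prefix and tail
    // their logs, mirroring the gathering loop in the log above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	components := []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"storage-provisioner"}
    	for _, name := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+name,
    			"--format", "{{.ID}}").Output()
    		if err != nil {
    			continue
    		}
    		ids := strings.Fields(string(out))
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    			fmt.Printf("=== %s [%s] ===\n%s", name, id, logs)
    		}
    	}
    }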
	I0320 17:16:34.916830   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:39.919385   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:39.919654   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:39.944683   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:39.944802   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:39.963657   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:39.963756   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:39.990796   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:39.990867   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:40.016529   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:40.016604   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:40.028850   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:40.028920   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:40.040081   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:40.040143   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:40.053598   14536 logs.go:276] 0 containers: []
	W0320 17:16:40.053610   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:40.053669   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:40.068676   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:40.068692   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:40.068698   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:40.086260   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:40.086271   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:40.098479   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:40.098490   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:40.110559   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:40.110572   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:40.127455   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:40.127467   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:40.139235   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:40.139245   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:40.151153   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:40.151165   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:40.191071   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:40.191079   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:40.227709   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:40.227722   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:40.242361   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:40.242373   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:40.270298   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:40.270309   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:40.308616   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:40.308628   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:40.322822   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:40.322833   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:40.337225   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:40.337236   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:40.349079   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:40.349090   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:40.361231   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:40.361242   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:40.372619   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:40.372632   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:40.384129   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:40.384139   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:40.388686   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:40.388695   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:42.907878   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:47.908546   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:47.908851   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:47.944645   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:47.944787   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:47.965154   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:47.965251   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:47.981284   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:47.981354   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:47.993378   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:47.993454   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:48.003984   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:48.004044   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:48.014622   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:48.014685   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:48.028756   14536 logs.go:276] 0 containers: []
	W0320 17:16:48.028767   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:48.028826   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:48.039787   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:48.039806   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:48.039811   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:48.080229   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:48.080251   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:48.093697   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:48.093711   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:48.120874   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:48.120888   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:48.133501   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:48.133514   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:48.168785   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:48.168796   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:48.182860   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:48.182874   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:48.194923   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:48.194933   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:48.206640   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:48.206651   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:48.218011   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:48.218025   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:48.229342   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:48.229361   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:48.234082   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:48.234091   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:48.248043   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:48.248054   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:48.270255   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:48.270265   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:48.281310   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:48.281323   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:48.304857   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:48.304868   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:48.344803   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:48.344815   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:48.368947   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:48.368959   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:48.381084   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:48.381095   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:50.893654   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:55.895740   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:55.896118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:55.930031   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:55.930164   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:55.950399   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:55.950490   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:55.964892   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:55.964972   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:55.977180   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:55.977242   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:55.987411   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:55.987478   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:55.997910   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:55.997969   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:56.008840   14536 logs.go:276] 0 containers: []
	W0320 17:16:56.008850   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:56.008902   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:56.019366   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:56.019380   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:56.019385   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:56.060978   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:56.060991   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:56.072806   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:56.072823   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:56.086012   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:56.086023   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:56.098048   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:56.098060   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:56.124119   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:56.124129   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:56.139286   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:56.139297   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:56.150877   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:56.150887   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:56.167300   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:56.167312   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:56.181853   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:56.181865   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:56.199159   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:56.199170   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:56.210309   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:56.210320   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:56.224682   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:56.224693   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:56.262037   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:56.262048   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:56.276151   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:56.276162   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:56.288291   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:56.288301   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:56.293249   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:56.293256   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:56.327633   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:56.327645   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:56.342467   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:56.342478   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:58.856430   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:03.857168   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:03.857363   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:03.878697   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:03.878813   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:03.893715   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:03.893791   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:03.906498   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:03.906579   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:03.917730   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:03.917801   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:03.928379   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:03.928453   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:03.939226   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:03.939302   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:03.949059   14536 logs.go:276] 0 containers: []
	W0320 17:17:03.949071   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:03.949118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:03.959765   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:03.959784   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:03.959790   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:03.974044   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:03.974054   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:03.985459   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:03.985469   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:03.996398   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:03.996412   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:04.027550   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:04.027561   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:04.041443   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:04.041454   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:04.076755   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:04.076770   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:04.116508   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:04.116527   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:04.128224   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:04.128236   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:04.140037   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:04.140049   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:04.151549   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:04.151565   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:04.192393   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:04.192413   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:04.222122   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:04.222137   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:04.251134   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:04.251145   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:04.262903   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:04.262914   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:04.280785   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:04.280797   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:04.293176   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:04.293188   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:04.298021   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:04.298027   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:04.316858   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:04.316869   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:06.830344   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:11.832794   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:11.832959   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:11.849059   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:11.849132   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:11.864930   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:11.865003   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:11.875525   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:11.875594   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:11.885892   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:11.885955   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:11.900380   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:11.900447   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:11.911048   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:11.911118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:11.921656   14536 logs.go:276] 0 containers: []
	W0320 17:17:11.921669   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:11.921720   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:11.932276   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:11.932294   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:11.932301   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:11.943569   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:11.943581   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:11.983792   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:11.983802   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:11.997646   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:11.997658   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:12.009317   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:12.009327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:12.021328   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:12.021337   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:12.033037   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:12.033052   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:12.062333   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:12.062343   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:12.074448   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:12.074459   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:12.091230   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:12.091246   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:12.095675   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:12.095682   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:12.111707   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:12.111716   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:12.149730   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:12.149743   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:12.163471   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:12.163480   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:12.177898   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:12.177910   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:12.194772   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:12.194781   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:12.229938   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:12.229949   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:12.241649   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:12.241659   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:12.252767   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:12.252777   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:14.767089   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:19.769267   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:19.769377   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:19.780406   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:19.780484   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:19.791415   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:19.791494   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:19.801758   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:19.801832   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:19.811558   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:19.811615   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:19.821945   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:19.822015   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:19.832498   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:19.832568   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:19.842839   14536 logs.go:276] 0 containers: []
	W0320 17:17:19.842851   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:19.842910   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:19.853267   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:19.853283   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:19.853288   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:19.892303   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:19.892315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:19.904361   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:19.904375   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:19.916170   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:19.916183   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:19.928026   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:19.928037   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:19.966015   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:19.966026   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:19.979207   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:19.979216   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:19.990441   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:19.990456   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:20.002199   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:20.002210   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:20.006480   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:20.006487   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:20.017415   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:20.017425   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:20.034698   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:20.034709   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:20.051885   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:20.051895   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:20.094554   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:20.094566   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:20.108466   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:20.108478   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:20.122440   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:20.122453   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:20.136484   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:20.136497   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:20.148765   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:20.148776   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:20.160833   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:20.160844   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:22.689334   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:27.691832   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:27.692050   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:27.713549   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:27.713643   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:27.728283   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:27.728359   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:27.740559   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:27.740625   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:27.753273   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:27.753347   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:27.763994   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:27.764071   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:27.774232   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:27.774303   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:27.784388   14536 logs.go:276] 0 containers: []
	W0320 17:17:27.784398   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:27.784452   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:27.795812   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:27.795826   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:27.795831   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:27.812543   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:27.812555   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:27.839370   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:27.839380   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:27.857367   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:27.857378   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:27.869382   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:27.869392   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:27.880644   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:27.880655   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:27.891905   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:27.891916   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:27.905227   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:27.905237   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:27.917096   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:27.917107   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:27.929627   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:27.929638   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:27.970831   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:27.970843   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:28.007311   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:28.007324   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:28.018169   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:28.018181   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:28.030878   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:28.030887   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:28.066910   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:28.066921   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:28.083235   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:28.083245   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:28.104582   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:28.104594   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:28.121368   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:28.121380   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:28.126063   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:28.126073   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:30.641970   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:35.644166   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:35.644307   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:35.658588   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:35.658669   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:35.670777   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:35.670849   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:35.681241   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:35.681312   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:35.691940   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:35.692008   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:35.702139   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:35.702203   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:35.718444   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:35.718520   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:35.729127   14536 logs.go:276] 0 containers: []
	W0320 17:17:35.729139   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:35.729197   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:35.739405   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:35.739423   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:35.739428   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:35.751351   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:35.751363   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:35.763268   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:35.763285   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:35.789265   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:35.789274   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:35.793571   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:35.793581   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:35.804987   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:35.804997   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:35.821901   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:35.821910   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:35.834028   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:35.834040   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:35.849791   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:35.849802   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:35.861854   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:35.861864   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:35.873060   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:35.873071   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:35.884425   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:35.884438   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:35.897946   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:35.897955   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:35.934208   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:35.934219   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:35.949529   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:35.949540   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:35.964283   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:35.964292   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:36.005695   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:36.005704   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:36.041212   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:36.041225   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:36.058909   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:36.058922   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
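
Each diagnostic pass has the same shape: enumerate containers per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, then tail each one's logs. The two IDs reported for every component suggest a restarted container (an exited instance plus the current one); only the kindnet filter matches nothing, since this cluster apparently does not deploy it. A sketch of the discovery step under those assumptions, with the component name as the only parameter:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs lists IDs of all containers (running or exited) whose
    // name matches the k8s_<component> prefix, mirroring the docker ps
    // invocations in the log above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        // One ID per line; Fields also tolerates trailing whitespace.
        return strings.Fields(string(out)), nil
    }

    func main() {
        ids, err := containerIDs("kube-apiserver")
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%d containers: %v\n", len(ids), ids)
    }
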
	I0320 17:17:38.577324   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:43.579692   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:43.579877   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:43.596176   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:43.596286   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:43.609786   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:43.609880   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:43.620899   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:43.620974   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:43.632235   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:43.632320   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:43.643675   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:43.643760   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:43.655098   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:43.655184   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:43.665339   14536 logs.go:276] 0 containers: []
	W0320 17:17:43.665353   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:43.665424   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:43.676258   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:43.676276   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:43.676282   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:43.702938   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:43.702950   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:43.708199   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:43.708205   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:43.744908   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:43.744922   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:43.773942   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:43.773952   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:43.786765   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:43.786777   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:43.813901   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:43.813910   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:43.825159   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:43.825171   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:43.862525   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:43.862538   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:43.874298   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:43.874311   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:43.891131   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:43.891143   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:43.902326   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:43.902338   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:43.922196   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:43.922208   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:43.962933   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:43.962948   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:43.976851   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:43.976861   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:43.991070   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:43.991084   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:44.002885   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:44.002898   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:44.014470   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:44.014483   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:44.030686   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:44.030698   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
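
Log collection itself is a fixed set of shell commands run over SSH: docker logs --tail 400 <id> for each container, journalctl -u kubelet and -u docker -u cri-docker for host services, dmesg for the kernel ring buffer, and kubectl describe nodes against the local kubeconfig. A hypothetical Go wrapper for the per-container step (the ID below is one of the apiserver containers from this run):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs fetches the last n lines of a container's logs,
    // mirroring the `docker logs --tail 400 <id>` calls above. docker logs
    // replays the container's stdout and stderr streams on the matching
    // descriptors, so CombinedOutput captures both.
    func tailContainerLogs(id string, n int) (string, error) {
        out, err := exec.Command("docker", "logs",
            "--tail", fmt.Sprint(n), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        logs, err := tailContainerLogs("989e89c1ff06", 400)
        if err != nil {
            fmt.Println("docker logs failed:", err)
        }
        fmt.Print(logs)
    }
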
	I0320 17:17:46.545128   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:51.547737   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:51.548204   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:51.590953   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:51.591095   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:51.611983   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:51.612088   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:51.626919   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:51.627007   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:51.640076   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:51.640145   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:51.651825   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:51.651899   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:51.666838   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:51.666911   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:51.676735   14536 logs.go:276] 0 containers: []
	W0320 17:17:51.676750   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:51.676808   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:51.687210   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:51.687227   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:51.687233   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:51.701788   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:51.701798   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:51.713715   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:51.713729   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:51.724627   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:51.724641   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:51.735920   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:51.735932   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:51.740446   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:51.740455   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:51.775305   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:51.775315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:51.792269   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:51.792278   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:51.803931   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:51.803941   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:51.829506   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:51.829517   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:51.843041   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:51.843050   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:51.854861   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:51.854874   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:51.866484   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:51.866498   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:51.884038   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:51.884049   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:51.925799   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:51.925807   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:51.964579   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:51.964589   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:51.978609   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:51.978622   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:51.990107   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:51.990118   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:52.001662   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:52.001672   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:54.514325   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:59.516883   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:59.517213   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:59.550284   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:59.550479   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:59.571634   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:59.571725   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:59.585253   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:59.585326   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:59.597659   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:59.597731   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:59.608627   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:59.608695   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:59.619456   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:59.619528   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:59.630299   14536 logs.go:276] 0 containers: []
	W0320 17:17:59.630310   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:59.630368   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:59.641597   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:59.641613   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:59.641618   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:59.681249   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:59.681265   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:59.699625   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:59.699636   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:59.710670   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:59.710682   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:59.723031   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:59.723042   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:59.766145   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:59.766157   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:59.780306   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:59.780315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:59.794513   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:59.794525   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:59.806249   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:59.806260   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:59.817825   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:59.817836   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:59.841995   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:59.842003   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:59.855926   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:59.855937   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:59.872345   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:59.872358   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:59.890040   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:59.890049   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:59.902526   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:59.902538   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:59.907501   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:59.907508   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:59.947464   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:59.947477   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:59.959602   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:59.959613   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:59.974317   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:59.974327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:02.491795   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:07.494152   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:07.494502   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:07.524035   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:07.524164   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:07.541438   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:07.541521   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:07.556722   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:07.556807   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:07.568289   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:07.568363   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:07.579493   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:07.579563   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:07.596692   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:07.596762   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:07.607180   14536 logs.go:276] 0 containers: []
	W0320 17:18:07.607194   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:07.607255   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:07.617542   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:07.617558   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:07.617563   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:07.628824   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:07.628840   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:07.640531   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:07.640544   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:07.664267   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:07.664274   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:07.680852   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:07.680862   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:07.692518   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:07.692529   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:07.714338   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:07.714350   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:07.726351   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:07.726364   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:07.737249   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:07.737261   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:07.741685   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:07.741694   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:07.755851   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:07.755865   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:07.792961   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:07.792974   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:07.804616   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:07.804627   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:07.815589   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:07.815606   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:07.827663   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:07.827673   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:07.862647   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:07.862658   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:07.877399   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:07.877408   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:07.891791   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:07.891803   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:07.931864   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:07.931880   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:10.447538   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:15.448552   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:15.448811   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:15.475706   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:15.475827   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:15.497220   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:15.497295   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:15.510172   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:15.510247   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:15.522001   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:15.522063   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:15.533853   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:15.533922   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:15.545936   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:15.546014   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:15.559510   14536 logs.go:276] 0 containers: []
	W0320 17:18:15.559521   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:15.559573   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:15.570857   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:15.570871   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:15.570877   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:15.582849   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:15.582863   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:15.595992   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:15.596002   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:15.635475   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:15.635484   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:15.653066   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:15.653077   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:15.690668   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:15.690680   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:15.705930   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:15.705945   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:15.710673   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:15.710681   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:15.723181   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:15.723191   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:15.746027   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:15.746038   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:15.783366   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:15.783379   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:15.795786   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:15.795798   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:15.813089   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:15.813098   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:15.828132   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:15.828147   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:15.839625   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:15.839639   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:15.852487   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:15.852501   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:15.867417   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:15.867428   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:15.879171   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:15.879181   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:15.897313   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:15.897324   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:18.410906   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:23.413091   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:23.413420   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:23.452761   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:23.452909   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:23.473291   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:23.473380   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:23.487857   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:23.487933   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:23.500337   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:23.500407   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:23.512241   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:23.512311   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:23.523102   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:23.523174   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:23.533757   14536 logs.go:276] 0 containers: []
	W0320 17:18:23.533769   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:23.533825   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:23.544613   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:23.544632   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:23.544637   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:23.568551   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:23.568562   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:23.574027   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:23.574038   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:23.612449   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:23.612460   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:23.624261   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:23.624274   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:23.636014   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:23.636024   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:23.659949   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:23.659957   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:23.701208   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:23.701228   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:23.713188   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:23.713201   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:23.725572   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:23.725586   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:23.743246   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:23.743257   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:23.755255   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:23.755265   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:23.769665   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:23.769679   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:23.808057   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:23.808068   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:23.822925   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:23.822936   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:23.837805   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:23.837816   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:23.849699   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:23.849710   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:23.862039   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:23.862051   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:23.875116   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:23.875127   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:26.388345   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:31.390030   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:31.390155   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:31.402216   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:31.402295   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:31.415105   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:31.415170   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:31.427724   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:31.427803   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:31.438876   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:31.438937   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:31.449999   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:31.450067   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:31.461302   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:31.461370   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:31.472619   14536 logs.go:276] 0 containers: []
	W0320 17:18:31.472636   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:31.472693   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:31.485732   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:31.485752   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:31.485757   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:31.501089   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:31.501099   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:31.539142   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:31.539153   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:31.552497   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:31.552516   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:31.566958   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:31.566971   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:31.579607   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:31.579618   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:31.619897   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:31.619912   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:31.624370   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:31.624379   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:31.662379   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:31.662390   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:31.677027   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:31.677040   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:31.688573   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:31.688583   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:31.700590   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:31.700604   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:31.712440   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:31.712450   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:31.724218   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:31.724229   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:31.739486   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:31.739495   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:31.764136   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:31.764149   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:31.782432   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:31.782443   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:31.795038   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:31.795051   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:31.814354   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:31.814366   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:34.328078   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:39.330227   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:39.330404   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:39.347161   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:39.347245   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:39.362542   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:39.362611   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:39.373450   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:39.373522   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:39.385304   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:39.385373   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:39.397221   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:39.397297   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:39.408637   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:39.408710   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:39.421993   14536 logs.go:276] 0 containers: []
	W0320 17:18:39.422005   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:39.422063   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:39.433310   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:39.433327   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:39.433332   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:39.445312   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:39.445327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:39.456771   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:39.456784   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:39.475552   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:39.475563   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:39.479919   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:39.479927   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:39.516990   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:39.517001   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:39.529185   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:39.529195   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:39.546912   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:39.546925   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:39.560813   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:39.560824   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:39.580536   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:39.580547   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:39.592670   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:39.592682   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:39.615739   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:39.615747   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:39.655320   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:39.655330   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:39.667886   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:39.667899   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:39.679774   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:39.679786   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:39.692011   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:39.692021   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:39.732116   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:39.732131   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:39.748639   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:39.748650   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:39.763532   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:39.763545   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:42.280835   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:47.281646   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:47.281812   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:47.295660   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:47.295758   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:47.308269   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:47.308341   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:47.319023   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:47.319114   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:47.330207   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:47.330280   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:47.340586   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:47.340653   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:47.351464   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:47.351530   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:47.362725   14536 logs.go:276] 0 containers: []
	W0320 17:18:47.362736   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:47.362787   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:47.373424   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:47.373441   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:47.373446   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:47.388099   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:47.388111   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:47.399366   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:47.399381   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:47.410873   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:47.410883   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:47.424814   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:47.424827   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:47.436488   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:47.436500   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:47.459092   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:47.459099   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:47.495031   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:47.495043   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:47.509353   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:47.509364   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:47.525913   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:47.525922   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:47.551146   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:47.551156   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:47.567599   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:47.567611   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:47.580212   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:47.580223   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:47.584823   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:47.584831   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:47.598810   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:47.598824   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:47.634979   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:47.634990   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:47.646882   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:47.646897   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:47.688393   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:47.688402   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:47.702118   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:47.702129   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
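
Stepping back, the timestamps show the enclosing pattern: a probe roughly every eight seconds (the 5-second timeout plus about 2.5 seconds between rounds), each failure followed by a full diagnostic sweep, until the caller's overall deadline expires. A schematic of that wait loop, with probe and collect as stand-ins for the real checks; the intervals are read off the timestamps above, not taken from minikube's source:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForHealthy retries probe until it succeeds or the overall
    // deadline passes, running collect after each failed attempt, in the
    // spirit of the healthz/gather cycle recorded in this log.
    func waitForHealthy(deadline time.Duration, probe func() error, collect func()) error {
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            if err := probe(); err == nil {
                return nil
            }
            collect()
            time.Sleep(2500 * time.Millisecond) // gap between rounds seen above
        }
        return errors.New("apiserver never became healthy before the deadline")
    }

    func main() {
        // The stand-in probe fails immediately; in this run the real one
        // burned its full 5s timeout every round and never succeeded.
        err := waitForHealthy(30*time.Second,
            func() error { return errors.New("healthz timed out") },
            func() { fmt.Println("gathering logs ...") },
        )
        fmt.Println(err)
    }
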
	I0320 17:18:50.214620   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:55.216858   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:55.217272   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:55.257922   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:55.258064   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:55.278867   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:55.278966   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:55.296120   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:55.296202   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:55.311362   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:55.311434   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:55.336040   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:55.336113   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:55.346483   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:55.346565   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:55.356396   14536 logs.go:276] 0 containers: []
	W0320 17:18:55.356408   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:55.356465   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:55.366884   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:55.366902   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:55.366908   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:55.380899   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:55.380911   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:55.395997   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:55.396006   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:55.412203   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:55.412214   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:55.428777   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:55.428793   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:55.441102   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:55.441113   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:55.480685   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:55.480694   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:55.491933   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:55.491946   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:55.503667   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:55.503676   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:55.515198   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:55.515212   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:55.526832   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:55.526845   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:55.531128   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:55.531136   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:55.567543   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:55.567554   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:55.579236   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:55.579245   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:55.590855   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:55.590866   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:55.614556   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:55.614564   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:55.655506   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:55.655513   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:55.670969   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:55.670979   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:55.682617   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:55.682629   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
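Each gathering cycle above follows the same two-step pattern: enumerate container IDs per component with a docker name filter, then tail each container's logs. A minimal sketch of that pattern follows; the k8s_<component> naming convention and the 400-line tail are taken from the commands in the log, while the helper itself is hypothetical, not minikube's code:

```go
// Sketch of the log-gathering pattern: list all containers (running or
// exited) whose name matches k8s_<component>, then tail each one's logs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := containerIDs("kube-apiserver")
	if err != nil {
		fmt.Println("docker ps failed:", err)
		return
	}
	for _, id := range ids {
		// Tail the last 400 lines, as in: docker logs --tail 400 <id>
		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s ===\n%s", id, logs)
	}
}
```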
	I0320 17:18:58.202206   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:03.204298   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:03.204548   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:03.230524   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:03.230641   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:03.247680   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:03.247759   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:03.260974   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:03.261061   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:03.272500   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:03.272565   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:03.282632   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:03.282712   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:03.293575   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:03.293651   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:03.303740   14536 logs.go:276] 0 containers: []
	W0320 17:19:03.303756   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:03.303823   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:03.315172   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:03.315189   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:03.315195   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:03.330018   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:03.330028   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:03.342301   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:03.342314   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:03.355433   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:03.355445   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:03.367384   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:03.367394   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:03.378669   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:03.378681   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:03.401880   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:03.401892   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:03.415683   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:03.415694   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:03.452757   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:03.452767   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:03.466701   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:03.466714   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:03.503783   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:03.503794   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:03.515800   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:03.515812   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:03.527333   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:03.527344   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:03.531542   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:03.531550   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:03.548302   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:03.548312   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:03.566618   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:03.566629   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:03.578170   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:03.578180   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:03.591668   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:03.591678   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:03.631662   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:03.631669   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:06.147405   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:11.149964   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:11.150399   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:11.187604   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:11.187727   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:11.208786   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:11.208883   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:11.223959   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:11.224041   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:11.236589   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:11.236658   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:11.247825   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:11.247895   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:11.258707   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:11.258773   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:11.268644   14536 logs.go:276] 0 containers: []
	W0320 17:19:11.268657   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:11.268715   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:11.279472   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:11.279488   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:11.279494   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:11.313502   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:11.313515   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:11.327528   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:11.327539   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:11.339153   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:11.339167   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:11.351684   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:11.351695   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:11.356216   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:11.356223   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:11.368065   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:11.368076   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:11.383879   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:11.383893   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:11.395289   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:11.395300   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:11.409945   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:11.409955   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:11.430804   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:11.430815   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:11.442877   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:11.442888   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:11.454543   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:11.454554   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:11.466153   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:11.466163   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:11.488670   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:11.488680   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:11.526054   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:11.526067   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:11.540546   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:11.540556   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:11.554849   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:11.554860   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:11.572512   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:11.572523   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:14.114712   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:19.116802   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:19.116934   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:19.128318   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:19.128389   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:19.140508   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:19.140576   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:19.152679   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:19.152766   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:19.165848   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:19.165920   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:19.177509   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:19.177593   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:19.190419   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:19.190493   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:19.201436   14536 logs.go:276] 0 containers: []
	W0320 17:19:19.201449   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:19.201514   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:19.214482   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:19.214501   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:19.214506   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:19.227801   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:19.227816   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:19.240045   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:19.240071   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:19.253556   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:19.253570   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:19.272648   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:19.272665   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:19.286386   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:19.286401   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:19.300998   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:19.301014   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:19.346699   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:19.346715   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:19.383874   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:19.383894   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:19.403293   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:19.403307   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:19.415129   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:19.415144   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:19.444855   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:19.444869   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:19.460775   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:19.460791   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:19.500642   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:19.500660   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:19.528670   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:19.528685   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:19.545840   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:19.545854   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:19.571748   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:19.571765   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:19.584894   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:19.584908   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:19.589565   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:19.589574   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:22.106018   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:27.106107   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:27.106338   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:27.130233   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:27.130332   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:27.149158   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:27.149233   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:27.161646   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:27.161727   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:27.173488   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:27.173561   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:27.185540   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:27.185603   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:27.196457   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:27.196522   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:27.206681   14536 logs.go:276] 0 containers: []
	W0320 17:19:27.206694   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:27.206748   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:27.217373   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:27.217389   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:27.217394   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:27.251272   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:27.251287   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:27.262986   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:27.262996   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:27.304831   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:27.304843   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:27.309107   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:27.309112   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:27.346136   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:27.346148   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:27.360593   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:27.360607   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:27.382463   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:27.382478   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:27.393909   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:27.393922   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:27.405842   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:27.405852   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:27.420025   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:27.420037   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:27.434801   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:27.434812   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:27.448401   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:27.448412   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:27.459581   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:27.459592   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:27.470832   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:27.470844   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:27.482286   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:27.482297   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:27.498823   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:27.498834   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:27.515348   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:27.515358   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:27.526893   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:27.526907   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:30.049348   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:35.049906   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:35.049942   14536 kubeadm.go:591] duration metric: took 4m6.549813916s to restartPrimaryControlPlane
	W0320 17:19:35.049974   14536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0320 17:19:35.049988   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0320 17:19:36.135240   14536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.085279167s)
	I0320 17:19:36.135309   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0320 17:19:36.140421   14536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:19:36.143438   14536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:19:36.146443   14536 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0320 17:19:36.146451   14536 kubeadm.go:156] found existing configuration files:
	
	I0320 17:19:36.146482   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf
	I0320 17:19:36.149781   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0320 17:19:36.149818   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:19:36.152345   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf
	I0320 17:19:36.154782   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0320 17:19:36.154805   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:19:36.157865   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf
	I0320 17:19:36.160914   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0320 17:19:36.160958   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:19:36.163736   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf
	I0320 17:19:36.166375   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0320 17:19:36.166398   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
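The grep-then-rm sequence above keeps each kubeconfig only if it already references the expected control-plane endpoint; anything else is removed so that the following kubeadm init can regenerate it. A sketch of the same check done directly in Go (file list and endpoint copied from the log; this is an illustration, not minikube's kubeadm.go code):

```go
// Sketch of the stale-kubeconfig cleanup: remove any /etc/kubernetes/*.conf
// that does not reference the expected control-plane endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:52377"
	for _, conf := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(conf)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: clear it. Ignoring the Remove
			// error gives rm -f semantics for already-absent files.
			os.Remove(conf)
			fmt.Println("cleared:", conf)
			continue
		}
		fmt.Println("kept:", conf)
	}
}
```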
	I0320 17:19:36.170227   14536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0320 17:19:36.186053   14536 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0320 17:19:36.186084   14536 kubeadm.go:309] [preflight] Running pre-flight checks
	I0320 17:19:36.232485   14536 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0320 17:19:36.232542   14536 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0320 17:19:36.232588   14536 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0320 17:19:36.284513   14536 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0320 17:19:36.287559   14536 out.go:204]   - Generating certificates and keys ...
	I0320 17:19:36.287595   14536 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0320 17:19:36.287631   14536 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0320 17:19:36.287701   14536 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0320 17:19:36.287734   14536 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0320 17:19:36.287770   14536 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0320 17:19:36.287800   14536 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0320 17:19:36.287837   14536 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0320 17:19:36.287875   14536 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0320 17:19:36.287919   14536 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0320 17:19:36.287961   14536 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0320 17:19:36.287982   14536 kubeadm.go:309] [certs] Using the existing "sa" key
	I0320 17:19:36.288009   14536 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0320 17:19:36.369937   14536 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0320 17:19:36.505255   14536 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0320 17:19:36.616076   14536 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0320 17:19:36.683609   14536 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0320 17:19:36.713184   14536 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0320 17:19:36.713423   14536 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0320 17:19:36.713476   14536 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0320 17:19:36.795412   14536 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0320 17:19:36.801089   14536 out.go:204]   - Booting up control plane ...
	I0320 17:19:36.801145   14536 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0320 17:19:36.801186   14536 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0320 17:19:36.801229   14536 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0320 17:19:36.801270   14536 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0320 17:19:36.801432   14536 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0320 17:19:41.302530   14536 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501718 seconds
	I0320 17:19:41.302630   14536 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0320 17:19:41.306320   14536 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0320 17:19:41.825812   14536 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0320 17:19:41.826209   14536 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-120000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0320 17:19:42.329338   14536 kubeadm.go:309] [bootstrap-token] Using token: 8mr0af.bvjv90spdzjaujy3
	I0320 17:19:42.335328   14536 out.go:204]   - Configuring RBAC rules ...
	I0320 17:19:42.335381   14536 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0320 17:19:42.337586   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0320 17:19:42.343377   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0320 17:19:42.344456   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0320 17:19:42.345201   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0320 17:19:42.346104   14536 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0320 17:19:42.349006   14536 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0320 17:19:42.521436   14536 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0320 17:19:42.740060   14536 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0320 17:19:42.740498   14536 kubeadm.go:309] 
	I0320 17:19:42.740529   14536 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0320 17:19:42.740532   14536 kubeadm.go:309] 
	I0320 17:19:42.740571   14536 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0320 17:19:42.740574   14536 kubeadm.go:309] 
	I0320 17:19:42.740585   14536 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0320 17:19:42.740615   14536 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0320 17:19:42.740649   14536 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0320 17:19:42.740655   14536 kubeadm.go:309] 
	I0320 17:19:42.740687   14536 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0320 17:19:42.740691   14536 kubeadm.go:309] 
	I0320 17:19:42.740712   14536 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0320 17:19:42.740714   14536 kubeadm.go:309] 
	I0320 17:19:42.740738   14536 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0320 17:19:42.740788   14536 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0320 17:19:42.740831   14536 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0320 17:19:42.740838   14536 kubeadm.go:309] 
	I0320 17:19:42.740883   14536 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0320 17:19:42.740936   14536 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0320 17:19:42.740944   14536 kubeadm.go:309] 
	I0320 17:19:42.740990   14536 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8mr0af.bvjv90spdzjaujy3 \
	I0320 17:19:42.741055   14536 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 \
	I0320 17:19:42.741068   14536 kubeadm.go:309] 	--control-plane 
	I0320 17:19:42.741072   14536 kubeadm.go:309] 
	I0320 17:19:42.741119   14536 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0320 17:19:42.741123   14536 kubeadm.go:309] 
	I0320 17:19:42.741176   14536 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8mr0af.bvjv90spdzjaujy3 \
	I0320 17:19:42.741233   14536 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 
	I0320 17:19:42.741311   14536 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0320 17:19:42.741319   14536 cni.go:84] Creating CNI manager for ""
	I0320 17:19:42.741326   14536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:19:42.744750   14536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0320 17:19:42.747594   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0320 17:19:42.750671   14536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
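The 457-byte conflist copied above is what the "Configuring bridge CNI" step installs. The exact JSON is not shown in the log, so the sketch below writes an illustrative bridge + portmap conflist with assumed values (bridge name, subnet), not minikube's actual file; writing under /etc/cni/net.d requires root:

```go
// Sketch: write a minimal bridge CNI conflist. The field values are
// illustrative assumptions, not the contents of minikube's 1-k8s.conflist.
package main

import (
	"fmt"
	"os"
)

func main() {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Println("write failed (root required):", err)
	}
}
```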
	I0320 17:19:42.755355   14536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0320 17:19:42.755402   14536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0320 17:19:42.755474   14536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-120000 minikube.k8s.io/updated_at=2024_03_20T17_19_42_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=9b0fc36eb9ca8470d9076ea1f55dc7d08f560c77 minikube.k8s.io/name=running-upgrade-120000 minikube.k8s.io/primary=true
	I0320 17:19:42.789289   14536 ops.go:34] apiserver oom_adj: -16
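The oom_adj value of -16 reported here means the kernel's OOM killer strongly avoids the apiserver process. A sketch of the same check the logged `cat /proc/$(pgrep kube-apiserver)/oom_adj` one-liner performs (using `pgrep -n` to pick a single newest PID, a slight hardening over the shell version):

```go
// Sketch: read the apiserver's OOM adjustment from /proc.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// -n picks the newest matching process, so we get exactly one PID.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	// -16 (as in the log) means the OOM killer strongly avoids this process.
	fmt.Printf("apiserver oom_adj: %s", adj)
}
```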
	I0320 17:19:42.789371   14536 kubeadm.go:1107] duration metric: took 34.010583ms to wait for elevateKubeSystemPrivileges
	W0320 17:19:42.800780   14536 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0320 17:19:42.800792   14536 kubeadm.go:393] duration metric: took 4m14.315042s to StartCluster
	I0320 17:19:42.800802   14536 settings.go:142] acquiring lock: {Name:mkcbd6581de6bc632d1ec3032ba4410c2461597a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:42.800878   14536 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:19:42.801308   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:42.801511   14536 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:19:42.805616   14536 out.go:177] * Verifying Kubernetes components...
	I0320 17:19:42.801567   14536 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0320 17:19:42.801700   14536 config.go:182] Loaded profile config "running-upgrade-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:19:42.812636   14536 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-120000"
	I0320 17:19:42.812656   14536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-120000"
	I0320 17:19:42.812694   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:19:42.812673   14536 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-120000"
	I0320 17:19:42.812740   14536 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-120000"
	W0320 17:19:42.812747   14536 addons.go:243] addon storage-provisioner should already be in state true
	I0320 17:19:42.812759   14536 host.go:66] Checking if "running-upgrade-120000" exists ...
	I0320 17:19:42.814178   14536 kapi.go:59] client config for running-upgrade-120000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10427ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0320 17:19:42.814293   14536 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-120000"
	W0320 17:19:42.814299   14536 addons.go:243] addon default-storageclass should already be in state true
	I0320 17:19:42.814307   14536 host.go:66] Checking if "running-upgrade-120000" exists ...
	I0320 17:19:42.818573   14536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:19:42.821483   14536 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:42.821490   14536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0320 17:19:42.821496   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:19:42.822257   14536 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:42.822262   14536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0320 17:19:42.822266   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:19:42.899831   14536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:19:42.905040   14536 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:19:42.905086   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:19:42.908680   14536 api_server.go:72] duration metric: took 107.164583ms to wait for apiserver process to appear ...
	I0320 17:19:42.908688   14536 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:19:42.908693   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:42.953011   14536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:42.955533   14536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
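Addon installation above is two steps: scp the manifest into the VM, then apply it with the cluster's own kubectl binary and the in-VM kubeconfig. A sketch of the apply step as an exec call (paths copied verbatim from the log; error handling simplified):

```go
// Sketch: apply an addon manifest using the in-VM kubectl and kubeconfig,
// mirroring the Run line above. sudo accepts VAR=value arguments, so the
// KUBECONFIG environment variable survives the privilege switch.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
```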
	I0320 17:19:47.910619   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:47.910665   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:52.911098   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:52.911117   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:57.911322   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:57.911378   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:02.911790   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:02.911855   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:07.912381   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:07.912405   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:12.913086   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:12.913123   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0320 17:20:13.292331   14536 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0320 17:20:13.296538   14536 out.go:177] * Enabled addons: storage-provisioner
	I0320 17:20:13.304435   14536 addons.go:505] duration metric: took 30.504060416s for enable addons: enabled=[storage-provisioner]
	I0320 17:20:17.913667   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:17.913775   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:22.915042   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:22.915089   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:27.916663   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:27.916706   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:32.918682   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:32.918720   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:37.918998   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:37.919036   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:42.920156   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:42.920349   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:42.933008   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:20:42.933079   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:42.945047   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:20:42.945132   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:42.958654   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:20:42.958728   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:42.988483   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:20:42.988557   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:43.005442   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:20:43.005513   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:43.016753   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:20:43.016828   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:43.027659   14536 logs.go:276] 0 containers: []
	W0320 17:20:43.027674   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:43.027737   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:43.039355   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:20:43.039370   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:20:43.039375   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:20:43.055848   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:20:43.055860   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:20:43.070904   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:20:43.070914   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:20:43.083251   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:20:43.083263   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:20:43.095214   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:43.095225   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:43.118521   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:20:43.118531   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:43.130320   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:43.130331   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:43.164616   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:43.164625   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:43.169127   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:20:43.169137   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:20:43.181683   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:20:43.181693   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:20:43.197519   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:20:43.197528   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:20:43.217336   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:43.217348   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:43.290656   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:20:43.290668   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:20:45.804798   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:50.807042   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:50.807226   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:50.820238   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:20:50.820316   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:50.831335   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:20:50.831409   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:50.842202   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:20:50.842265   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:50.852781   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:20:50.852850   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:50.863808   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:20:50.863880   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:50.874990   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:20:50.875055   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:50.885600   14536 logs.go:276] 0 containers: []
	W0320 17:20:50.885615   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:50.885673   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:50.896786   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:20:50.896805   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:50.896811   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:50.930096   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:20:50.930106   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:20:50.942613   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:20:50.942626   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:20:50.961093   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:20:50.961103   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:20:50.974164   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:50.974178   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:50.998545   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:20:50.998558   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:51.010336   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:51.010348   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:51.015065   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:51.015073   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:51.053218   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:20:51.053229   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:20:51.068143   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:20:51.068154   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:20:51.082734   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:20:51.082745   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:20:51.094763   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:20:51.094780   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:20:51.112609   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:20:51.112619   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:20:53.626646   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:58.628928   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:58.629102   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:58.651435   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:20:58.651533   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:58.665688   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:20:58.665759   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:58.678059   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:20:58.678118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:58.689432   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:20:58.689498   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:58.700922   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:20:58.701015   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:58.712133   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:20:58.712199   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:58.722958   14536 logs.go:276] 0 containers: []
	W0320 17:20:58.722970   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:58.723030   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:58.733550   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:20:58.733568   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:58.733573   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:58.738320   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:20:58.738328   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:20:58.752821   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:20:58.752835   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:20:58.764761   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:20:58.764771   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:20:58.776873   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:20:58.776883   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:20:58.788535   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:58.788548   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:58.811569   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:58.811577   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:58.844660   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:58.844668   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:58.880802   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:20:58.880817   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:20:58.895480   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:20:58.895493   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:20:58.907817   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:20:58.907828   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:20:58.923263   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:20:58.923274   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:20:58.946971   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:20:58.946982   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:01.460730   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:06.462035   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:06.462287   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:06.481028   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:06.481110   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:06.494974   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:06.495043   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:06.506249   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:06.506317   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:06.516427   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:06.516489   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:06.526318   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:06.526412   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:06.537364   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:06.537434   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:06.547440   14536 logs.go:276] 0 containers: []
	W0320 17:21:06.547451   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:06.547510   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:06.558982   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:06.558997   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:06.559002   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:06.592423   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:06.592436   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:06.606810   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:06.606822   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:06.621259   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:06.621268   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:06.633251   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:06.633261   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:06.647335   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:06.647350   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:06.658701   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:06.658713   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:06.663110   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:06.663116   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:06.700158   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:06.700172   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:06.718586   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:06.718597   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:06.729806   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:06.729817   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:06.741356   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:06.741368   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:06.758956   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:06.758970   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:09.285875   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:14.287909   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:14.288062   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:14.302729   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:14.302808   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:14.314408   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:14.314474   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:14.324983   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:14.325055   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:14.334998   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:14.335059   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:14.345333   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:14.345402   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:14.366509   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:14.366580   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:14.376511   14536 logs.go:276] 0 containers: []
	W0320 17:21:14.376524   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:14.376576   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:14.387101   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:14.387116   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:14.387122   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:14.400883   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:14.400897   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:14.412968   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:14.412980   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:14.427720   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:14.427730   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:14.439752   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:14.439765   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:14.464533   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:14.464547   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:14.483138   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:14.483149   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:14.494278   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:14.494290   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:14.528840   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:14.528852   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:14.533720   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:14.533727   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:14.575693   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:14.575704   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:14.591924   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:14.591937   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:14.604028   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:14.604042   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:17.117244   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:22.119390   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:22.119492   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:22.131423   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:22.131498   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:22.141944   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:22.142012   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:22.152247   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:22.152312   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:22.162367   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:22.162435   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:22.172714   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:22.172774   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:22.183540   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:22.183601   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:22.194233   14536 logs.go:276] 0 containers: []
	W0320 17:21:22.194243   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:22.194294   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:22.205350   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:22.205367   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:22.205372   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:22.239755   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:22.239764   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:22.244882   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:22.244889   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:22.260981   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:22.260991   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:22.272130   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:22.272158   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:22.286887   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:22.286898   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:22.298454   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:22.298463   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:22.321121   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:22.321134   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:22.358230   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:22.358244   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:22.372317   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:22.372327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:22.384497   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:22.384509   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:22.396275   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:22.396286   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:22.413799   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:22.413809   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:24.926838   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:29.928176   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:29.928320   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:29.949284   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:29.949375   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:29.965151   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:29.965228   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:29.977734   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:29.977804   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:29.988518   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:29.988589   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:30.003764   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:30.003835   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:30.015222   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:30.015289   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:30.025490   14536 logs.go:276] 0 containers: []
	W0320 17:21:30.025502   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:30.025559   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:30.036271   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:30.036290   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:30.036295   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:30.051079   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:30.051088   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:30.062743   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:30.062754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:30.073679   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:30.073690   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:30.097945   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:30.097953   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:30.109052   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:30.109062   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:30.142714   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:30.142730   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:30.147180   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:30.147188   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:30.182013   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:30.182027   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:30.199977   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:30.199989   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:30.211644   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:30.211656   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:30.228768   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:30.228780   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:30.240908   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:30.240921   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:32.757598   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:37.759599   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:37.759758   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:37.776353   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:37.776432   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:37.788981   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:37.789044   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:37.799914   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:37.799979   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:37.810635   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:37.810701   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:37.821721   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:37.821784   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:37.835656   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:37.835720   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:37.845867   14536 logs.go:276] 0 containers: []
	W0320 17:21:37.845878   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:37.845932   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:37.855788   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:37.855803   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:37.855808   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:37.868434   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:37.868445   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:37.880594   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:37.880606   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:37.898518   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:37.898528   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:37.913620   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:37.913631   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:37.927033   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:37.927046   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:37.964912   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:37.964925   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:37.977020   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:37.977031   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:37.992564   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:37.992577   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:38.004206   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:38.004220   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:38.027474   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:38.027484   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:38.038954   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:38.038969   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:38.072775   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:38.072786   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:40.579207   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:45.581366   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:45.581616   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:45.613367   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:45.613479   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:45.633987   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:45.634078   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:45.647335   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:45.647413   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:45.658855   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:45.658921   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:45.673533   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:45.673602   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:45.689737   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:45.689811   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:45.699754   14536 logs.go:276] 0 containers: []
	W0320 17:21:45.699768   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:45.699817   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:45.710618   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:45.710633   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:45.710638   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:45.722017   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:45.722027   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:45.736398   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:45.736410   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:45.748439   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:45.748449   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:45.772389   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:45.772400   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:45.797158   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:45.797166   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:45.832109   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:45.832121   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:45.852040   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:45.852052   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:45.863965   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:45.863976   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:45.875244   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:45.875258   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:45.887445   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:45.887456   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:45.921724   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:45.921731   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:45.926264   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:45.926269   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:48.441840   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:53.443931   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:53.444056   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:53.462316   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:53.462409   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:53.476386   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:53.476458   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:53.488586   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:53.488661   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:53.499268   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:53.499338   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:53.516255   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:53.516322   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:53.526553   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:53.526612   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:53.536881   14536 logs.go:276] 0 containers: []
	W0320 17:21:53.536891   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:53.536944   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:53.547384   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:53.547398   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:53.547403   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:53.564601   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:53.564610   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:53.576351   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:53.576362   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:53.589991   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:53.590002   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:53.604718   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:53.604729   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:53.642166   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:53.642177   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:53.659917   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:53.659927   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:53.671405   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:53.671416   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:53.683414   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:53.683424   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:53.695304   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:53.695314   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:53.719785   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:53.719793   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:53.753914   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:53.753925   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:53.758359   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:53.758365   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:56.272166   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:01.274178   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:01.274292   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:01.290089   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:01.290169   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:01.315054   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:01.315115   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:01.325791   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:01.325864   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:01.337003   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:01.337063   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:01.347184   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:01.347250   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:01.357333   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:01.357399   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:01.367228   14536 logs.go:276] 0 containers: []
	W0320 17:22:01.367239   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:01.367298   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:01.381383   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:01.381402   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:01.381408   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:01.393261   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:01.393272   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:01.406910   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:01.406924   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:01.424954   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:01.424974   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:01.440580   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:01.440593   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:01.452011   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:01.452021   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:01.464012   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:01.464025   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:01.468516   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:01.468523   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:01.487559   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:01.487569   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:01.498866   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:01.498880   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:01.510109   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:01.510119   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:01.545255   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:01.545264   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:01.558741   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:01.558751   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:01.574006   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:01.574015   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:01.610263   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:01.610275   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:04.136477   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:09.138559   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:09.138693   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:09.151579   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:09.151655   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:09.162104   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:09.162171   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:09.172820   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:09.172884   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:09.185635   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:09.185698   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:09.196364   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:09.196424   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:09.207017   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:09.207079   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:09.222362   14536 logs.go:276] 0 containers: []
	W0320 17:22:09.222373   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:09.222426   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:09.232931   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:09.232947   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:09.232953   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:09.244502   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:09.244514   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:09.256128   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:09.256143   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:09.267927   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:09.267940   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:09.279539   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:09.279550   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:09.303579   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:09.303588   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:09.340999   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:09.341028   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:09.361782   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:09.361793   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:09.373858   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:09.373868   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:09.408787   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:09.408796   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:09.423266   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:09.423276   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:09.428025   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:09.428032   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:09.451342   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:09.451352   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:09.463150   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:09.463166   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:09.475160   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:09.475172   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:11.995751   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:16.996149   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:16.996292   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:17.014394   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:17.014472   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:17.027047   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:17.027119   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:17.038681   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:17.038759   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:17.048746   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:17.048812   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:17.059556   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:17.059626   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:17.074148   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:17.074211   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:17.084228   14536 logs.go:276] 0 containers: []
	W0320 17:22:17.084241   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:17.084301   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:17.095137   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:17.095153   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:17.095159   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:17.106545   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:17.106555   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:17.117849   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:17.117859   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:17.135568   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:17.135579   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:17.149798   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:17.149807   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:17.174706   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:17.174714   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:17.210910   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:17.211308   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:17.226538   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:17.226548   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:17.238457   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:17.238468   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:17.250195   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:17.250205   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:17.265024   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:17.265035   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:17.298575   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:17.298584   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:17.303290   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:17.303298   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:17.314580   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:17.314591   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:17.329584   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:17.329594   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:19.843134   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:24.845217   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:24.845395   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:24.863757   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:24.863834   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:24.878392   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:24.878461   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:24.890605   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:24.890676   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:24.901280   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:24.901345   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:24.911659   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:24.911729   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:24.921901   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:24.921968   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:24.931852   14536 logs.go:276] 0 containers: []
	W0320 17:22:24.931864   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:24.931925   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:24.942207   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:24.942228   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:24.942234   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:24.947035   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:24.947042   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:24.962150   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:24.962159   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:24.974250   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:24.974260   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:24.999992   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:25.000010   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:25.037449   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:25.037467   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:25.052757   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:25.052766   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:25.068138   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:25.068148   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:25.080705   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:25.080724   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:25.098727   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:25.098736   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:25.111435   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:25.111446   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:25.146272   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:25.146283   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:25.167845   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:25.167854   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:25.179697   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:25.179708   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:25.192284   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:25.192295   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:27.711003   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:32.713165   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:32.713356   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:32.733525   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:32.733595   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:32.753568   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:32.753643   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:32.765055   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:32.765122   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:32.775493   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:32.775549   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:32.791974   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:32.792037   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:32.803798   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:32.803866   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:32.814206   14536 logs.go:276] 0 containers: []
	W0320 17:22:32.814218   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:32.814268   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:32.824445   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:32.824461   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:32.824466   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:32.859075   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:32.859089   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:32.873139   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:32.873155   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:32.884541   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:32.884554   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:32.896112   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:32.896122   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:32.900356   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:32.900362   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:32.935605   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:32.935616   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:32.950948   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:32.950959   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:32.962524   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:32.962533   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:32.978031   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:32.978041   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:32.996714   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:32.996725   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:33.020281   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:33.020289   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:33.031405   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:33.031416   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:33.043268   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:33.043278   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:33.054821   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:33.054836   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:35.568267   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:40.570766   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:40.571014   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:40.603533   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:40.603631   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:40.620196   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:40.620276   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:40.632916   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:40.632987   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:40.644071   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:40.644141   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:40.654847   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:40.654906   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:40.665440   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:40.665506   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:40.675339   14536 logs.go:276] 0 containers: []
	W0320 17:22:40.675349   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:40.675395   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:40.688814   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:40.688836   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:40.688841   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:40.704856   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:40.704867   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:40.723501   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:40.723511   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:40.746346   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:40.746353   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:40.760746   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:40.760755   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:40.772837   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:40.772850   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:40.789142   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:40.789154   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:40.801316   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:40.801327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:40.813028   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:40.813038   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:40.853320   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:40.853329   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:40.857919   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:40.857928   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:40.892810   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:40.892822   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:40.905656   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:40.905667   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:40.917864   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:40.917875   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:40.937744   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:40.937754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:43.452011   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:48.454110   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:48.454262   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:48.468625   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:48.468705   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:48.480709   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:48.480774   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:48.491508   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:48.491578   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:48.502125   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:48.502180   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:48.511910   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:48.511972   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:48.526838   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:48.526908   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:48.536940   14536 logs.go:276] 0 containers: []
	W0320 17:22:48.536950   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:48.537004   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:48.551635   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:48.551653   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:48.551659   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:48.570968   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:48.570979   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:48.596360   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:48.596380   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:48.601363   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:48.601370   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:48.639083   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:48.639099   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:48.653744   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:48.653754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:48.665460   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:48.665469   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:48.677584   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:48.677597   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:48.692194   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:48.692207   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:48.704128   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:48.704143   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:48.716108   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:48.716117   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:48.733914   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:48.733928   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:48.768847   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:48.768855   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:48.780791   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:48.780805   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:48.792453   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:48.792462   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:51.307031   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:56.309101   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:56.309214   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:56.321293   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:56.321368   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:56.331686   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:56.331744   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:56.342034   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:56.342105   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:56.352399   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:56.352456   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:56.362908   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:56.362972   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:56.373411   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:56.373477   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:56.386796   14536 logs.go:276] 0 containers: []
	W0320 17:22:56.386808   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:56.386861   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:56.399716   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:56.399734   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:56.399739   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:56.412181   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:56.412194   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:56.427693   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:56.427704   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:56.440944   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:56.440957   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:56.455920   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:56.455931   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:56.467466   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:56.467477   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:56.501873   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:56.501883   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:56.537112   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:56.537124   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:56.549378   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:56.549391   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:56.572295   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:56.572302   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:56.576948   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:56.576957   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:56.602407   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:56.602421   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:56.619734   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:56.619748   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:56.637244   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:56.637254   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:56.649463   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:56.649475   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:59.165671   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:04.167903   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:04.168182   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:04.201651   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:04.201753   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:04.220201   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:04.220283   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:04.234382   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:04.234472   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:04.245935   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:04.246008   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:04.256576   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:04.256644   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:04.267375   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:04.267436   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:04.278552   14536 logs.go:276] 0 containers: []
	W0320 17:23:04.278565   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:04.278625   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:04.289698   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:04.289716   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:04.289721   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:04.301403   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:04.301414   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:04.336183   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:04.336192   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:04.348162   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:04.348172   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:04.359522   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:04.359533   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:04.370901   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:04.370915   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:04.382535   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:04.382545   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:04.394204   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:04.394216   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:04.413236   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:04.413248   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:04.429958   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:04.429969   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:04.443763   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:04.443774   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:04.457039   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:04.457056   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:04.461799   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:04.461805   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:04.499295   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:04.499310   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:04.523085   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:04.523091   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:07.040193   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:12.042300   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:12.042569   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:12.067628   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:12.067716   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:12.085791   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:12.085864   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:12.098721   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:12.098794   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:12.109861   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:12.109925   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:12.120198   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:12.120266   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:12.131280   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:12.131345   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:12.141295   14536 logs.go:276] 0 containers: []
	W0320 17:23:12.141306   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:12.141360   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:12.151290   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:12.151311   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:12.151315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:12.162692   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:12.162701   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:12.174142   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:12.174152   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:12.191575   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:12.191584   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:12.216159   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:12.216168   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:12.230386   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:12.230401   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:12.242052   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:12.242065   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:12.246438   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:12.246448   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:12.281941   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:12.281954   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:12.300114   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:12.300125   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:12.311823   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:12.311836   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:12.332118   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:12.332129   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:12.366208   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:12.366219   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:12.382930   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:12.382943   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:12.394419   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:12.394434   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:14.911301   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:19.913451   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:19.913615   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:19.929165   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:19.929247   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:19.942152   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:19.942231   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:19.953437   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:19.953512   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:19.963996   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:19.964060   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:19.975185   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:19.975258   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:19.985707   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:19.985774   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:19.999069   14536 logs.go:276] 0 containers: []
	W0320 17:23:19.999080   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:19.999135   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:20.009284   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:20.009298   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:20.009303   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:20.044522   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:20.044537   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:20.073140   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:20.073152   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:20.085635   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:20.085647   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:20.090221   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:20.090228   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:20.106093   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:20.106104   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:20.121888   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:20.121901   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:20.145235   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:20.145243   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:20.156490   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:20.156504   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:20.170965   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:20.170975   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:20.182556   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:20.182569   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:20.195880   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:20.195889   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:20.207682   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:20.207698   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:20.242828   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:20.242838   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:20.255959   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:20.255969   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:22.774074   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:27.776079   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:27.776187   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:27.794406   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:27.794505   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:27.805069   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:27.805134   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:27.815902   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:27.815971   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:27.830765   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:27.830824   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:27.842888   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:27.842959   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:27.853445   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:27.853503   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:27.863761   14536 logs.go:276] 0 containers: []
	W0320 17:23:27.863770   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:27.863822   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:27.873998   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:27.874020   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:27.874025   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:27.886499   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:27.886509   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:27.903192   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:27.903204   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:27.917495   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:27.917507   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:27.931536   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:27.931550   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:27.949517   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:27.949528   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:27.954297   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:27.954305   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:27.990406   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:27.990417   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:28.004438   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:28.004447   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:28.015739   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:28.015752   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:28.027149   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:28.027160   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:28.042921   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:28.042934   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:28.065671   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:28.065678   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:28.098908   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:28.098917   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:28.110243   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:28.110255   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:30.623916   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:35.626012   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:35.626197   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:35.647437   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:35.647520   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:35.661139   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:35.661198   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:35.673368   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:35.673435   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:35.683517   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:35.683574   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:35.694212   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:35.694282   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:35.704848   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:35.704914   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:35.714369   14536 logs.go:276] 0 containers: []
	W0320 17:23:35.714380   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:35.714434   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:35.724473   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:35.724490   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:35.724494   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:35.759951   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:35.759962   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:35.772866   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:35.772877   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:35.788197   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:35.788208   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:35.802368   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:35.802381   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:35.813210   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:35.813224   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:35.824712   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:35.824725   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:35.837690   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:35.837702   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:35.855301   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:35.855311   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:35.879336   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:35.879346   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:35.891319   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:35.891330   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:35.895767   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:35.895774   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:35.907260   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:35.907272   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:35.919228   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:35.919238   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:35.934109   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:35.934120   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:38.471992   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:43.474029   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:43.476934   14536 out.go:177] 
	W0320 17:23:43.481372   14536 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0320 17:23:43.481379   14536 out.go:239] * 
	W0320 17:23:43.481827   14536 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:23:43.494327   14536 out.go:177] 

** /stderr **
version_upgrade_test.go:132: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p running-upgrade-120000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
panic.go:626: *** TestRunningBinaryUpgrade FAILED at 2024-03-20 17:23:43.569595 -0700 PDT m=+1409.819967543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-120000 -n running-upgrade-120000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p running-upgrade-120000 -n running-upgrade-120000: exit status 2 (15.637945s)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestRunningBinaryUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestRunningBinaryUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-arm64 -p running-upgrade-120000 logs -n 25
helpers_test.go:252: TestRunningBinaryUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo cat                            | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo cat                            | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo cat                            | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo cat                            | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo                                | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo find                           | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-670000 sudo crio                           | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-670000                                     | cilium-670000             | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT | 20 Mar 24 17:12 PDT |
	| start   | -p kubernetes-upgrade-990000                         | kubernetes-upgrade-990000 | jenkins | v1.32.0 | 20 Mar 24 17:12 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p offline-docker-761000                             | offline-docker-761000     | jenkins | v1.32.0 | 20 Mar 24 17:13 PDT | 20 Mar 24 17:13 PDT |
	| stop    | -p kubernetes-upgrade-990000                         | kubernetes-upgrade-990000 | jenkins | v1.32.0 | 20 Mar 24 17:13 PDT | 20 Mar 24 17:13 PDT |
	| start   | -p kubernetes-upgrade-990000                         | kubernetes-upgrade-990000 | jenkins | v1.32.0 | 20 Mar 24 17:13 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0                  |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-184000                            | minikube                  | jenkins | v1.26.0 | 20 Mar 24 17:13 PDT | 20 Mar 24 17:14 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-990000                         | kubernetes-upgrade-990000 | jenkins | v1.32.0 | 20 Mar 24 17:13 PDT | 20 Mar 24 17:13 PDT |
	| start   | -p running-upgrade-120000                            | minikube                  | jenkins | v1.26.0 | 20 Mar 24 17:13 PDT | 20 Mar 24 17:14 PDT |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --vm-driver=qemu2                                    |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-184000 stop                          | minikube                  | jenkins | v1.26.0 | 20 Mar 24 17:14 PDT | 20 Mar 24 17:14 PDT |
	| start   | -p stopped-upgrade-184000                            | stopped-upgrade-184000    | jenkins | v1.32.0 | 20 Mar 24 17:14 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| start   | -p running-upgrade-120000                            | running-upgrade-120000    | jenkins | v1.32.0 | 20 Mar 24 17:14 PDT |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                               |                           |         |         |                     |                     |
	|         | --driver=qemu2                                       |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-184000                            | stopped-upgrade-184000    | jenkins | v1.32.0 | 20 Mar 24 17:23 PDT |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/20 17:14:52
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0320 17:14:52.222384   14536 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:14:52.222524   14536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:14:52.222530   14536 out.go:304] Setting ErrFile to fd 2...
	I0320 17:14:52.222533   14536 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:14:52.222666   14536 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:14:52.223804   14536 out.go:298] Setting JSON to false
	I0320 17:14:52.241155   14536 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8062,"bootTime":1710972030,"procs":520,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:14:52.241216   14536 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:14:52.245617   14536 out.go:177] * [running-upgrade-120000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:14:52.252680   14536 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:14:52.256619   14536 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:14:52.252728   14536 notify.go:220] Checking for updates...
	I0320 17:14:52.264724   14536 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:14:52.268605   14536 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:14:52.271734   14536 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:14:52.274741   14536 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:14:52.278006   14536 config.go:182] Loaded profile config "running-upgrade-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:14:52.281686   14536 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0320 17:14:52.284731   14536 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:14:52.287661   14536 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:14:52.294679   14536 start.go:297] selected driver: qemu2
	I0320 17:14:52.294684   14536 start.go:901] validating driver "qemu2" against &{Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:14:52.294730   14536 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:14:52.296936   14536 cni.go:84] Creating CNI manager for ""
	I0320 17:14:52.296952   14536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:14:52.296976   14536 start.go:340] cluster config:
	{Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:14:52.297025   14536 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:14:52.305713   14536 out.go:177] * Starting "running-upgrade-120000" primary control-plane node in "running-upgrade-120000" cluster
	I0320 17:14:52.309679   14536 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:14:52.309693   14536 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0320 17:14:52.309697   14536 cache.go:56] Caching tarball of preloaded images
	I0320 17:14:52.309735   14536 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:14:52.309739   14536 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0320 17:14:52.309790   14536 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/config.json ...
	I0320 17:14:52.310124   14536 start.go:360] acquireMachinesLock for running-upgrade-120000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:15:03.655745   14536 start.go:364] duration metric: took 11.346047209s to acquireMachinesLock for "running-upgrade-120000"
	I0320 17:15:03.655775   14536 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:15:03.655781   14536 fix.go:54] fixHost starting: 
	I0320 17:15:03.656732   14536 fix.go:112] recreateIfNeeded on running-upgrade-120000: state=Running err=<nil>
	W0320 17:15:03.656740   14536 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:15:03.661226   14536 out.go:177] * Updating the running qemu2 "running-upgrade-120000" VM ...
	I0320 17:15:02.700458   14525 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/config.json ...
	I0320 17:15:02.700906   14525 machine.go:94] provisionDockerMachine start ...
	I0320 17:15:02.700977   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:02.701256   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:02.701265   14525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0320 17:15:02.784780   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0320 17:15:02.784832   14525 buildroot.go:166] provisioning hostname "stopped-upgrade-184000"
	I0320 17:15:02.784958   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:02.785143   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:02.785152   14525 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-184000 && echo "stopped-upgrade-184000" | sudo tee /etc/hostname
	I0320 17:15:02.859961   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-184000
	
	I0320 17:15:02.860051   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:02.860179   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:02.860187   14525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0320 17:15:02.929789   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0320 17:15:02.929804   14525 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17713-12248/.minikube CaCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17713-12248/.minikube}
	I0320 17:15:02.929817   14525 buildroot.go:174] setting up certificates
	I0320 17:15:02.929821   14525 provision.go:84] configureAuth start
	I0320 17:15:02.929842   14525 provision.go:143] copyHostCerts
	I0320 17:15:02.929923   14525 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem, removing ...
	I0320 17:15:02.929928   14525 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem
	I0320 17:15:02.930025   14525 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem (1679 bytes)
	I0320 17:15:02.930952   14525 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem, removing ...
	I0320 17:15:02.930958   14525 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem
	I0320 17:15:02.931016   14525 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem (1082 bytes)
	I0320 17:15:02.931134   14525 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem, removing ...
	I0320 17:15:02.931138   14525 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem
	I0320 17:15:02.931177   14525 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem (1123 bytes)
	I0320 17:15:02.931278   14525 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-184000 san=[127.0.0.1 localhost minikube stopped-upgrade-184000]
	I0320 17:15:02.980068   14525 provision.go:177] copyRemoteCerts
	I0320 17:15:02.980140   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0320 17:15:02.980150   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:15:03.017385   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0320 17:15:03.024620   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0320 17:15:03.032135   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0320 17:15:03.039245   14525 provision.go:87] duration metric: took 109.418875ms to configureAuth
	I0320 17:15:03.039255   14525 buildroot.go:189] setting minikube options for container-runtime
	I0320 17:15:03.039360   14525 config.go:182] Loaded profile config "stopped-upgrade-184000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:15:03.039401   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.039489   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.039494   14525 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0320 17:15:03.108134   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0320 17:15:03.108145   14525 buildroot.go:70] root file system type: tmpfs
	I0320 17:15:03.108207   14525 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0320 17:15:03.108260   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.108371   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.108404   14525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0320 17:15:03.178872   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0320 17:15:03.178925   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.179037   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.179046   14525 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0320 17:15:03.540367   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
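	The one-liner above is an idempotent unit-file update: the new unit is written to docker.service.new, and only when it differs from the installed unit is it moved into place and the daemon reloaded and restarted. Here the diff fails because no docker.service exists yet, so the replace branch runs. The same pattern, unrolled as a standalone sketch (paths as in the log; illustrative only):

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	sudo diff -u "$cur" "$new" || {
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload &&
	    sudo systemctl -f enable docker &&
	    sudo systemctl -f restart docker
	}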
	
	I0320 17:15:03.540383   14525 machine.go:97] duration metric: took 839.501541ms to provisionDockerMachine
	I0320 17:15:03.540390   14525 start.go:293] postStartSetup for "stopped-upgrade-184000" (driver="qemu2")
	I0320 17:15:03.540397   14525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0320 17:15:03.540484   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0320 17:15:03.540497   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:15:03.577640   14525 ssh_runner.go:195] Run: cat /etc/os-release
	I0320 17:15:03.578944   14525 info.go:137] Remote host: Buildroot 2021.02.12
	I0320 17:15:03.578951   14525 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/addons for local assets ...
	I0320 17:15:03.579015   14525 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/files for local assets ...
	I0320 17:15:03.579107   14525 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem -> 126652.pem in /etc/ssl/certs
	I0320 17:15:03.579194   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0320 17:15:03.581852   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:03.588725   14525 start.go:296] duration metric: took 48.329208ms for postStartSetup
	I0320 17:15:03.588747   14525 fix.go:56] duration metric: took 21.75475225s for fixHost
	I0320 17:15:03.588797   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.588910   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.588916   14525 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0320 17:15:03.655671   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710980103.725653129
	
	I0320 17:15:03.655682   14525 fix.go:216] guest clock: 1710980103.725653129
	I0320 17:15:03.655686   14525 fix.go:229] Guest: 2024-03-20 17:15:03.725653129 -0700 PDT Remote: 2024-03-20 17:15:03.58875 -0700 PDT m=+21.942645876 (delta=136.903129ms)
	I0320 17:15:03.655699   14525 fix.go:200] guest clock delta is within tolerance: 136.903129ms
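	The fix.go lines above sample the guest clock over SSH and compare it against the host clock, accepting the start when the delta stays within tolerance (about 137 ms here). A hedged approximation at second-level precision, with the SSH port and user taken from the sshutil lines in this log:

	guest_epoch=$(ssh -p 52272 docker@localhost 'date +%s')
	host_epoch=$(date +%s)
	echo "guest/host clock delta: $((host_epoch - guest_epoch))s"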
	I0320 17:15:03.655701   14525 start.go:83] releasing machines lock for "stopped-upgrade-184000", held for 21.821728833s
	I0320 17:15:03.655787   14525 ssh_runner.go:195] Run: cat /version.json
	I0320 17:15:03.655796   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:15:03.655870   14525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0320 17:15:03.656191   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	W0320 17:15:03.690822   14525 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0320 17:15:03.690899   14525 ssh_runner.go:195] Run: systemctl --version
	I0320 17:15:03.733070   14525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0320 17:15:03.734841   14525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0320 17:15:03.734868   14525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0320 17:15:03.737943   14525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0320 17:15:03.743495   14525 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0320 17:15:03.743509   14525 start.go:494] detecting cgroup driver to use...
	I0320 17:15:03.744395   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:03.750746   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0320 17:15:03.754142   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0320 17:15:03.757334   14525 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0320 17:15:03.757369   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0320 17:15:03.760743   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:03.763622   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0320 17:15:03.766508   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:03.769791   14525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0320 17:15:03.773003   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0320 17:15:03.776020   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0320 17:15:03.779035   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0320 17:15:03.782264   14525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0320 17:15:03.785772   14525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0320 17:15:03.789144   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:03.852015   14525 ssh_runner.go:195] Run: sudo systemctl restart containerd
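	The run of sed commands above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver rather than systemd, normalizes the runc runtime to io.containerd.runc.v2, and points conf_dir at /etc/cni/net.d before the restart. The essential edit, condensed (sed target as in the log; illustrative only):

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo systemctl daemon-reload
	sudo systemctl restart containerd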
	I0320 17:15:03.859992   14525 start.go:494] detecting cgroup driver to use...
	I0320 17:15:03.860071   14525 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0320 17:15:03.865276   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:03.870568   14525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0320 17:15:03.877353   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:03.882789   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0320 17:15:03.888863   14525 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0320 17:15:03.937602   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0320 17:15:03.942951   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:03.949082   14525 ssh_runner.go:195] Run: which cri-dockerd
	I0320 17:15:03.950304   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0320 17:15:03.953447   14525 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0320 17:15:03.959219   14525 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0320 17:15:04.026641   14525 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0320 17:15:04.091720   14525 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0320 17:15:04.091793   14525 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0320 17:15:04.097244   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:04.161434   14525 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:05.291480   14525 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.130067541s)
	I0320 17:15:05.291559   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0320 17:15:05.298851   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:05.303441   14525 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0320 17:15:05.369856   14525 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0320 17:15:05.434544   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:05.496683   14525 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0320 17:15:05.502602   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:05.507623   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:05.567205   14525 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0320 17:15:05.605341   14525 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0320 17:15:05.605416   14525 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0320 17:15:05.607606   14525 start.go:562] Will wait 60s for crictl version
	I0320 17:15:05.607662   14525 ssh_runner.go:195] Run: which crictl
	I0320 17:15:05.609003   14525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0320 17:15:05.623643   14525 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0320 17:15:05.623702   14525 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:05.639906   14525 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:05.659635   14525 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0320 17:15:05.659758   14525 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0320 17:15:05.661167   14525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
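	The one-liner above updates /etc/hosts safely: it filters out any stale host.minikube.internal entry, appends the 10.0.2.2 gateway mapping, writes the result to a temp file, and copies it back into place. The same logic, unrolled for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo $'10.0.2.2\thost.minikube.internal'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts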
	I0320 17:15:05.665471   14525 kubeadm.go:877] updating cluster {Name:stopped-upgrade-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0320 17:15:05.665515   14525 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:15:05.665569   14525 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:05.676230   14525 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:05.676238   14525 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:05.676280   14525 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:05.679153   14525 ssh_runner.go:195] Run: which lz4
	I0320 17:15:05.680409   14525 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0320 17:15:05.681686   14525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0320 17:15:05.681698   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0320 17:15:06.423639   14525 docker.go:649] duration metric: took 743.286625ms to copy over tarball
	I0320 17:15:06.423698   14525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
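	The docker.go lines above explain the copy: the guest's cached images are tagged k8s.gcr.io, but this minikube build checks for registry.k8s.io names, so "registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded" and the ~360 MB preload tarball is pushed over SSH and unpacked again. A hedged sketch of that decision:

	if ! docker images --format '{{.Repository}}:{{.Tag}}' \
	    | grep -qx 'registry.k8s.io/kube-apiserver:v1.24.1'; then
	  # scp the cached preloaded-images tarball to /preloaded.tar.lz4, then:
	  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	fi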
	I0320 17:15:03.669197   14536 machine.go:94] provisionDockerMachine start ...
	I0320 17:15:03.669242   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.669352   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:03.669357   14536 main.go:141] libmachine: About to run SSH command:
	hostname
	I0320 17:15:03.725734   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-120000
	
	I0320 17:15:03.725750   14536 buildroot.go:166] provisioning hostname "running-upgrade-120000"
	I0320 17:15:03.725801   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.725950   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:03.725956   14536 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-120000 && echo "running-upgrade-120000" | sudo tee /etc/hostname
	I0320 17:15:03.784055   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-120000
	
	I0320 17:15:03.784097   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.784213   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:03.784222   14536 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-120000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-120000/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-120000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0320 17:15:03.848951   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0320 17:15:03.848964   14536 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17713-12248/.minikube CaCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17713-12248/.minikube}
	I0320 17:15:03.848972   14536 buildroot.go:174] setting up certificates
	I0320 17:15:03.848977   14536 provision.go:84] configureAuth start
	I0320 17:15:03.848986   14536 provision.go:143] copyHostCerts
	I0320 17:15:03.849063   14536 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem, removing ...
	I0320 17:15:03.849071   14536 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem
	I0320 17:15:03.849183   14536 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem (1082 bytes)
	I0320 17:15:03.849377   14536 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem, removing ...
	I0320 17:15:03.849381   14536 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem
	I0320 17:15:03.849430   14536 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem (1123 bytes)
	I0320 17:15:03.849537   14536 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem, removing ...
	I0320 17:15:03.849540   14536 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem
	I0320 17:15:03.849576   14536 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem (1679 bytes)
	I0320 17:15:03.849670   14536 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-120000 san=[127.0.0.1 localhost minikube running-upgrade-120000]
	I0320 17:15:04.006968   14536 provision.go:177] copyRemoteCerts
	I0320 17:15:04.007016   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0320 17:15:04.007025   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:15:04.038085   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0320 17:15:04.044415   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0320 17:15:04.052048   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0320 17:15:04.060029   14536 provision.go:87] duration metric: took 211.053208ms to configureAuth
	I0320 17:15:04.060042   14536 buildroot.go:189] setting minikube options for container-runtime
	I0320 17:15:04.060145   14536 config.go:182] Loaded profile config "running-upgrade-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:15:04.060188   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.060279   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.060284   14536 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0320 17:15:04.117885   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0320 17:15:04.117897   14536 buildroot.go:70] root file system type: tmpfs
	I0320 17:15:04.117948   14536 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0320 17:15:04.118004   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.118118   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.118150   14536 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0320 17:15:04.178915   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0320 17:15:04.178969   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.179891   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.179905   14536 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0320 17:15:04.240108   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0320 17:15:04.240120   14536 machine.go:97] duration metric: took 570.938875ms to provisionDockerMachine
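The unit file written above uses the standard systemd override idiom its comments describe: the empty ExecStart= clears the command inherited from the base unit, and the second ExecStart= supplies the replacement. The diff -u ... || { mv ...; systemctl ...; } command that follows installs the new file and restarts Docker only when the contents actually changed, since diff exits non-zero on any difference. As a minimal sketch of rendering such a unit body in Go, assuming a text/template approach and a hypothetical ExtraFlags field (illustrative only, not minikube's actual code):

package main

import (
	"os"
	"text/template"
)

// unitTmpl resets the inherited ExecStart and then sets the real command;
// without the blank ExecStart=, systemd would see two ExecStart= settings
// and refuse to start a non-oneshot service.
const unitTmpl = `[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock {{.ExtraFlags}}
`

func main() {
	t := template.Must(template.New("unit").Parse(unitTmpl))
	// ExtraFlags is a hypothetical field used only for this sketch.
	t.Execute(os.Stdout, struct{ ExtraFlags string }{"--label provider=qemu2"})
}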
	I0320 17:15:04.240126   14536 start.go:293] postStartSetup for "running-upgrade-120000" (driver="qemu2")
	I0320 17:15:04.240132   14536 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0320 17:15:04.240189   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0320 17:15:04.240198   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:15:04.275900   14536 ssh_runner.go:195] Run: cat /etc/os-release
	I0320 17:15:04.277415   14536 info.go:137] Remote host: Buildroot 2021.02.12
	I0320 17:15:04.277422   14536 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/addons for local assets ...
	I0320 17:15:04.277490   14536 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/files for local assets ...
	I0320 17:15:04.277581   14536 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem -> 126652.pem in /etc/ssl/certs
	I0320 17:15:04.277670   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0320 17:15:04.280349   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:04.286757   14536 start.go:296] duration metric: took 46.628792ms for postStartSetup
	I0320 17:15:04.286771   14536 fix.go:56] duration metric: took 631.01725ms for fixHost
	I0320 17:15:04.286807   14536 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:04.286903   14536 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x102f8dbf0] 0x102f90450 <nil>  [] 0s} localhost 52309 <nil> <nil>}
	I0320 17:15:04.286907   14536 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0320 17:15:04.341548   14536 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710980104.436124545
	
	I0320 17:15:04.341555   14536 fix.go:216] guest clock: 1710980104.436124545
	I0320 17:15:04.341559   14536 fix.go:229] Guest: 2024-03-20 17:15:04.436124545 -0700 PDT Remote: 2024-03-20 17:15:04.286775 -0700 PDT m=+12.088271293 (delta=149.349545ms)
	I0320 17:15:04.341570   14536 fix.go:200] guest clock delta is within tolerance: 149.349545ms
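A note on several commands in this log: Go's fmt package renders any format verb that lacks a matching argument as %!verb(MISSING), so a shell command that itself contains % verbs (date +%s.%N here, and stat -c "%s %y", printf %s, and find -printf "%p, " elsewhere) gets garbled when it is passed to a printf-style logger as the format string rather than as an argument. A minimal Go reproduction:

package main

import "fmt"

func main() {
	// "date +%s.%N" contains printf-style verbs; logging it as the format
	// string with no arguments makes fmt render each verb as %!verb(MISSING):
	// this prints "date +%!s(MISSING).%!N(MISSING)".
	fmt.Printf("date +%s.%N\n")
}

The defensive form is fmt.Printf("%s\n", cmd), which treats the command as data instead of as a format string.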
	I0320 17:15:04.341573   14536 start.go:83] releasing machines lock for "running-upgrade-120000", held for 685.844917ms
	I0320 17:15:04.341629   14536 ssh_runner.go:195] Run: cat /version.json
	I0320 17:15:04.341640   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:15:04.341629   14536 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0320 17:15:04.341713   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	W0320 17:15:04.342173   14536 sshutil.go:64] dial failure (will retry): ssh: handshake failed: write tcp 127.0.0.1:52506->127.0.0.1:52309: write: broken pipe
	I0320 17:15:04.342185   14536 retry.go:31] will retry after 186.622111ms: ssh: handshake failed: write tcp 127.0.0.1:52506->127.0.0.1:52309: write: broken pipe
	W0320 17:15:04.368822   14536 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0320 17:15:04.368875   14536 ssh_runner.go:195] Run: systemctl --version
	I0320 17:15:04.370606   14536 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0320 17:15:04.372304   14536 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0320 17:15:04.372335   14536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0320 17:15:04.375218   14536 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0320 17:15:04.379531   14536 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0320 17:15:04.379540   14536 start.go:494] detecting cgroup driver to use...
	I0320 17:15:04.379605   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:04.384417   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0320 17:15:04.388440   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0320 17:15:04.395922   14536 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0320 17:15:04.395982   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0320 17:15:04.400371   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:04.404516   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0320 17:15:04.407658   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:04.410726   14536 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0320 17:15:04.413713   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0320 17:15:04.416932   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0320 17:15:04.420736   14536 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0320 17:15:04.424106   14536 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0320 17:15:04.426808   14536 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0320 17:15:04.429512   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:04.540961   14536 ssh_runner.go:195] Run: sudo systemctl restart containerd
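The run of sed one-liners above rewrites /etc/containerd/config.toml in place: it pins the pause image, forces the cgroupfs cgroup driver (SystemdCgroup = false), switches the runtime shim to io.containerd.runc.v2, points conf_dir at /etc/cni/net.d, and re-enables unprivileged ports before containerd is reloaded and restarted. A rough Go equivalent of just the SystemdCgroup rewrite, run against an inline sample rather than the real file (illustrative only):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}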
	I0320 17:15:04.553724   14536 start.go:494] detecting cgroup driver to use...
	I0320 17:15:04.553803   14536 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0320 17:15:04.569012   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:04.616107   14536 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0320 17:15:04.643724   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:04.648609   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0320 17:15:04.652883   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:04.658594   14536 ssh_runner.go:195] Run: which cri-dockerd
	I0320 17:15:04.659912   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0320 17:15:04.662856   14536 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0320 17:15:04.667851   14536 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0320 17:15:04.771332   14536 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0320 17:15:04.877377   14536 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0320 17:15:04.877440   14536 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0320 17:15:04.885679   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:04.986936   14536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:07.581148   14525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157475s)
	I0320 17:15:07.581163   14525 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0320 17:15:07.597482   14525 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:07.601585   14525 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0320 17:15:07.606734   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:07.669991   14525 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:09.394510   14525 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.724569375s)
	I0320 17:15:09.394811   14525 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:09.406544   14525 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:09.406556   14525 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:09.406561   14525 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
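The "wasn't preloaded" line above is the pivotal check: the preload tarball tagged its images under the old k8s.gcr.io registry (see the docker images output), while this minikube build expects registry.k8s.io names, so the lookup misses and every required image falls through to the slower per-image cache path that follows. The decision itself is plain set membership; a sketch of the idea (not minikube's actual code):

package main

import "fmt"

func main() {
	// Tags reported by `docker images` after extracting the preload tarball.
	got := map[string]bool{"k8s.gcr.io/kube-apiserver:v1.24.1": true}
	// Tag the newer minikube expects; the k8s.gcr.io -> registry.k8s.io
	// registry rename is exactly why this lookup misses.
	want := "registry.k8s.io/kube-apiserver:v1.24.1"
	if !got[want] {
		fmt.Printf("%s wasn't preloaded\n", want)
	}
}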
	I0320 17:15:09.415135   14525 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0320 17:15:09.415146   14525 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:09.415221   14525 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:09.415223   14525 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:09.415274   14525 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:09.415275   14525 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:09.415315   14525 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:09.415135   14525 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:09.421789   14525 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:09.421871   14525 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:09.421975   14525 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:09.421988   14525 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:09.422118   14525 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:09.422213   14525 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0320 17:15:09.422204   14525 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:09.422293   14525 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0320 17:15:11.391245   14525 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:11.391530   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:11.412708   14525 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0320 17:15:11.413810   14525 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:11.413885   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:11.427642   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0320 17:15:11.427762   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:11.429577   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0320 17:15:11.429594   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0320 17:15:11.465256   14525 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:11.465270   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0320 17:15:11.488168   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0320 17:15:11.507936   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
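Each cached image in this sequence goes through the same three steps: a stat probe for the image tarball on the guest, an scp of the arm64 tarball from the host cache when the probe fails, and a sudo cat ... | docker load to import it. A condensed sketch of that flow, with exec.Command standing in for minikube's SSH runner and an illustrative path (assumption: the real code runs these commands on the guest over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// loadIfMissing imports the image tarball at guestPath, copying it into
// place first when the stat probe reports it absent; this mirrors the
// existence check / scp / docker load pattern in the log above.
func loadIfMissing(guestPath string) error {
	if err := exec.Command("stat", "-c", "%s %y", guestPath).Run(); err != nil {
		fmt.Println("missing; this is where the scp from the host cache runs:", guestPath)
	}
	return exec.Command("/bin/bash", "-c", "sudo cat "+guestPath+" | docker load").Run()
}

func main() {
	fmt.Println(loadIfMissing("/var/lib/minikube/images/pause_3.7"))
}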
	I0320 17:15:11.507985   14525 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0320 17:15:11.508000   14525 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0320 17:15:11.508046   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0320 17:15:11.517907   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0320 17:15:11.518030   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0320 17:15:11.519665   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0320 17:15:11.519681   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0320 17:15:11.522170   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:11.531923   14525 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0320 17:15:11.531935   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0320 17:15:11.534761   14525 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0320 17:15:11.534780   14525 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:11.534828   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:11.549511   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:11.562573   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:11.569674   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:11.571138   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:11.590336   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0320 17:15:11.590360   14525 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0320 17:15:11.590375   14525 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:11.590418   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:11.590426   14525 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0320 17:15:11.590436   14525 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:11.590458   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:11.590461   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0320 17:15:11.601150   14525 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0320 17:15:11.601177   14525 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:11.601159   14525 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0320 17:15:11.601231   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:11.601259   14525 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:11.601275   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:11.610928   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0320 17:15:11.610945   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0320 17:15:11.626855   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0320 17:15:11.626938   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0320 17:15:11.627028   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:11.628395   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0320 17:15:11.628408   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0320 17:15:11.790264   14525 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:11.790285   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0320 17:15:11.929174   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0320 17:15:12.068787   14525 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:12.069022   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:12.089261   14525 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0320 17:15:12.089290   14525 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:12.089356   14525 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:12.108285   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0320 17:15:12.108427   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0320 17:15:12.110064   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0320 17:15:12.110080   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0320 17:15:12.136641   14525 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0320 17:15:12.136655   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0320 17:15:12.374264   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0320 17:15:12.374301   14525 cache_images.go:92] duration metric: took 2.967848875s to LoadCachedImages
	W0320 17:15:12.374664   14525 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0320 17:15:12.374672   14525 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0320 17:15:12.374722   14525 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-184000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0320 17:15:12.374781   14525 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0320 17:15:12.388882   14525 cni.go:84] Creating CNI manager for ""
	I0320 17:15:12.388894   14525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:15:12.388899   14525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0320 17:15:12.388907   14525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-184000 NodeName:stopped-upgrade-184000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0320 17:15:12.388979   14525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-184000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0320 17:15:12.389036   14525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0320 17:15:12.392107   14525 binaries.go:44] Found k8s binaries, skipping transfer
	I0320 17:15:12.392142   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0320 17:15:12.394532   14525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0320 17:15:12.399308   14525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0320 17:15:12.404048   14525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0320 17:15:12.409453   14525 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0320 17:15:12.410733   14525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
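The /etc/hosts rewrite above is idempotent: grep -v drops any existing line ending in the hostname, echo appends the fresh "IP<TAB>name" entry, and the temporary file is copied back over /etc/hosts. The same transformation in Go (a sketch; in minikube the pipeline runs on the guest via the SSH runner):

package main

import (
	"fmt"
	"strings"
)

// setHostsEntry removes any line for name and appends a fresh entry,
// matching the grep -v / echo / cp pipeline in the log above.
func setHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	fmt.Print(setHostsEntry("127.0.0.1\tlocalhost\n", "10.0.2.15", "control-plane.minikube.internal"))
}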
	I0320 17:15:12.414261   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:12.478803   14525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:15:12.485462   14525 certs.go:68] Setting up /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000 for IP: 10.0.2.15
	I0320 17:15:12.485472   14525 certs.go:194] generating shared ca certs ...
	I0320 17:15:12.485481   14525 certs.go:226] acquiring lock for ca certs: {Name:mkd2fbb5d6b83ab66860b229253f0a2f91e28e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.485949   14525 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key
	I0320 17:15:12.486105   14525 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key
	I0320 17:15:12.486111   14525 certs.go:256] generating profile certs ...
	I0320 17:15:12.486317   14525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key
	I0320 17:15:12.486334   14525 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259
	I0320 17:15:12.486346   14525 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0320 17:15:12.576715   14525 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259 ...
	I0320 17:15:12.576726   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259: {Name:mk0468bba07dc9aca965836ab6af72acc62992ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.577015   14525 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259 ...
	I0320 17:15:12.577024   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259: {Name:mkd8a41300bff8629eed00d001fa8df356524ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.577148   14525 certs.go:381] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt
	I0320 17:15:12.577283   14525 certs.go:385] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key
	I0320 17:15:12.577612   14525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/proxy-client.key
	I0320 17:15:12.577758   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem (1338 bytes)
	W0320 17:15:12.577928   14525 certs.go:480] ignoring /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665_empty.pem, impossibly tiny 0 bytes
	I0320 17:15:12.577934   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem (1675 bytes)
	I0320 17:15:12.577954   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem (1082 bytes)
	I0320 17:15:12.577972   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem (1123 bytes)
	I0320 17:15:12.577989   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem (1679 bytes)
	I0320 17:15:12.578026   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:12.578646   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0320 17:15:12.585420   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0320 17:15:12.591924   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0320 17:15:12.598120   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0320 17:15:12.604737   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0320 17:15:12.610979   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0320 17:15:12.617920   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0320 17:15:12.625299   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0320 17:15:12.632627   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0320 17:15:12.638963   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem --> /usr/share/ca-certificates/12665.pem (1338 bytes)
	I0320 17:15:12.645708   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /usr/share/ca-certificates/126652.pem (1708 bytes)
	I0320 17:15:12.652728   14525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0320 17:15:12.657873   14525 ssh_runner.go:195] Run: openssl version
	I0320 17:15:12.659725   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0320 17:15:12.662571   14525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:12.664134   14525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 21 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:12.664156   14525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:12.665943   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0320 17:15:12.669319   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12665.pem && ln -fs /usr/share/ca-certificates/12665.pem /etc/ssl/certs/12665.pem"
	I0320 17:15:12.672500   14525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12665.pem
	I0320 17:15:12.674022   14525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 21 00:02 /usr/share/ca-certificates/12665.pem
	I0320 17:15:12.674052   14525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12665.pem
	I0320 17:15:12.675954   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12665.pem /etc/ssl/certs/51391683.0"
	I0320 17:15:12.678811   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126652.pem && ln -fs /usr/share/ca-certificates/126652.pem /etc/ssl/certs/126652.pem"
	I0320 17:15:12.682099   14525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126652.pem
	I0320 17:15:12.683737   14525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 21 00:02 /usr/share/ca-certificates/126652.pem
	I0320 17:15:12.683759   14525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126652.pem
	I0320 17:15:12.685511   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126652.pem /etc/ssl/certs/3ec20f2e.0"
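Each openssl x509 -hash -noout call above computes a certificate's OpenSSL subject hash, and the paired ln -fs publishes the certificate under /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0, and 3ec20f2e.0 in this run), which is the lookup scheme OpenSSL-based clients use to find a trusted CA. A small Go sketch of the same pairing, shelling out to openssl (paths illustrative):

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink returns the /etc/ssl/certs/<subject-hash>.0 symlink name that
// the ln -fs commands above create for a PEM certificate.
func hashLink(pem string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return "", err
	}
	return filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0"), nil
}

func main() {
	fmt.Println(hashLink("/usr/share/ca-certificates/minikubeCA.pem"))
}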
	I0320 17:15:12.688817   14525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0320 17:15:12.690255   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0320 17:15:12.692567   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0320 17:15:12.694417   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0320 17:15:12.696613   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0320 17:15:12.698422   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0320 17:15:12.700266   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
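The -checkend 86400 probes above ask openssl whether each control-plane certificate will still be valid 24 hours (86400 seconds) from now; openssl exits non-zero when a certificate would expire within that window, letting the caller decide whether certs need regenerating. The same probe in Go:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithinDay reports whether crt expires within the next 86400
// seconds; `openssl x509 -checkend` exits non-zero exactly in that case.
func expiresWithinDay(crt string) bool {
	return exec.Command("openssl", "x509", "-noout", "-in", crt, "-checkend", "86400").Run() != nil
}

func main() {
	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}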
	I0320 17:15:12.702255   14525 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:15:12.702322   14525 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:12.714522   14525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0320 17:15:12.717814   14525 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0320 17:15:12.717820   14525 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0320 17:15:12.717823   14525 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0320 17:15:12.717840   14525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0320 17:15:12.721163   14525 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:12.721196   14525 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-184000" does not appear in /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:15:12.721210   14525 kubeconfig.go:62] /Users/jenkins/minikube-integration/17713-12248/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-184000" cluster setting kubeconfig missing "stopped-upgrade-184000" context setting]
	I0320 17:15:12.721932   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.723769   14525 kapi.go:59] client config for stopped-upgrade-184000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105922a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0320 17:15:12.728402   14525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0320 17:15:12.731114   14525 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-184000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
	I0320 17:15:12.731119   14525 kubeadm.go:1154] stopping kube-system containers ...
	I0320 17:15:12.731155   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:12.741809   14525 docker.go:483] Stopping containers: [fd0f420578d3 00d63bb5d94b cd1b8729cc02 cd64e17cdade b0cb64b9aeb4 24c5f7227f44 0ad0b2b977a2 3017c60fa3bc]
	I0320 17:15:12.741875   14525 ssh_runner.go:195] Run: docker stop fd0f420578d3 00d63bb5d94b cd1b8729cc02 cd64e17cdade b0cb64b9aeb4 24c5f7227f44 0ad0b2b977a2 3017c60fa3bc
	I0320 17:15:12.752551   14525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0320 17:15:12.757768   14525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:15:12.761099   14525 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0320 17:15:12.761105   14525 kubeadm.go:156] found existing configuration files:
	
	I0320 17:15:12.761126   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf
	I0320 17:15:12.763769   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0320 17:15:12.763788   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:15:12.766232   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf
	I0320 17:15:12.769265   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0320 17:15:12.769292   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:15:12.772061   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf
	I0320 17:15:12.774471   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0320 17:15:12.774492   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:15:12.777568   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf
	I0320 17:15:12.780649   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0320 17:15:12.780669   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0320 17:15:12.783314   14525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:15:12.786233   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:12.807804   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.244402   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.359009   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.384252   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.407667   14525 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:15:13.407747   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:13.909769   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:14.409787   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:14.414947   14525 api_server.go:72] duration metric: took 1.007320084s to wait for apiserver process to appear ...
	I0320 17:15:14.414959   14525 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:15:14.415708   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:19.417306   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:19.417358   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
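The wait loop above polls https://10.0.2.15:8443/healthz, treating a client timeout as "stopped" and retrying until an overall deadline. A minimal sketch of such a probe (assumptions: a 5s per-request timeout, and InsecureSkipVerify standing in for properly trusting the cluster CA):

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz polls the apiserver /healthz endpoint until it answers 200
// or the deadline passes, mirroring the retry loop in the log above.
func probeHealthz(url string, deadline time.Time) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch only: a real client would load the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("apiserver never became healthy")
}

func main() {
	fmt.Println(probeHealthz("https://10.0.2.15:8443/healthz", time.Now().Add(30*time.Second)))
}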
	I0320 17:15:21.509916   14536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (16.523606958s)
	I0320 17:15:21.510005   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0320 17:15:21.515319   14536 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0320 17:15:21.528245   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:21.533183   14536 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0320 17:15:21.607529   14536 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0320 17:15:21.698820   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:21.788336   14536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0320 17:15:21.794901   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:21.799119   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:21.890928   14536 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0320 17:15:21.930237   14536 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0320 17:15:21.930309   14536 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0320 17:15:21.933202   14536 start.go:562] Will wait 60s for crictl version
	I0320 17:15:21.933251   14536 ssh_runner.go:195] Run: which crictl
	I0320 17:15:21.934587   14536 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0320 17:15:21.945981   14536 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0320 17:15:21.946051   14536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:21.959240   14536 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:21.976857   14536 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0320 17:15:21.976984   14536 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0320 17:15:21.978479   14536 kubeadm.go:877] updating cluster {Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0320 17:15:21.978524   14536 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:15:21.978563   14536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:21.988743   14536 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:21.988755   14536 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:21.988801   14536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:21.992184   14536 ssh_runner.go:195] Run: which lz4
	I0320 17:15:21.993431   14536 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0320 17:15:21.994692   14536 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0320 17:15:21.994704   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
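The pair of lines above is a cheap idempotency check: stat the target first and only push the ~359 MB preload tarball when the stat fails. A sketch of the same check-then-copy pattern, assuming local files rather than scp over SSH (ensureFile is an illustrative helper, not minikube's API):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    )

    // ensureFile copies src to dst only when dst does not already exist.
    func ensureFile(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		return nil // already present, skip the expensive transfer
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.Create(dst)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	err := ensureFile("preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4", "/preloaded.tar.lz4")
    	fmt.Println(err)
    }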
	I0320 17:15:24.417649   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:24.417714   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:22.719991   14536 docker.go:649] duration metric: took 726.617125ms to copy over tarball
	I0320 17:15:22.720051   14536 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0320 17:15:24.088452   14536 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.3684405s)
	I0320 17:15:24.088467   14536 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0320 17:15:24.103948   14536 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:24.106839   14536 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0320 17:15:24.112071   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:24.196364   14536 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:25.405175   14536 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.208841708s)
	I0320 17:15:25.405262   14536 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:25.418527   14536 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:25.418537   14536 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
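The "wasn't preloaded" verdict comes from comparing the docker images listing against the required image set; note the k8s.gcr.io names in the listing never match the registry.k8s.io names being checked, which is why every image gets reloaded from cache below. A sketch of that membership check (the helper name preloaded is assumed for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // preloaded lists the runtime's images and reports which required refs are absent.
    func preloaded(required []string) (missing []string, err error) {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, ref := range required {
    		if !have[ref] {
    			missing = append(missing, ref)
    		}
    	}
    	return missing, nil
    }

    func main() {
    	m, err := preloaded([]string{"registry.k8s.io/kube-apiserver:v1.24.1"})
    	fmt.Println(m, err)
    }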
	I0320 17:15:25.418542   14536 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0320 17:15:25.424645   14536 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:25.424672   14536 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:25.424738   14536 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:25.424819   14536 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:25.424837   14536 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0320 17:15:25.424875   14536 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:25.424930   14536 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:25.425160   14536 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:25.433183   14536 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:25.433224   14536 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:25.433334   14536 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0320 17:15:25.433365   14536 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:25.433571   14536 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:25.433590   14536 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:25.433755   14536 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:25.434129   14536 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:29.418048   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:29.418074   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:27.480453   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0320 17:15:27.497979   14536 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0320 17:15:27.498013   14536 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0320 17:15:27.498077   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0320 17:15:27.510980   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0320 17:15:27.511086   14536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0320 17:15:27.512897   14536 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0320 17:15:27.512907   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0320 17:15:27.521739   14536 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0320 17:15:27.521748   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0320 17:15:27.543100   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:27.559561   14536 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
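Each cache load above is the same three-step cycle: docker rmi the image whose hash did not match, scp the cached tarball into /var/lib/minikube/images, then stream it through docker load exactly as the /bin/bash -c "sudo cat ... | docker load" lines show. A sketch of the remove-and-load half, assuming the tarball is already in place (loadFromCache is an illustrative name):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func loadFromCache(ref, tarball string) error {
    	// Remove the image whose hash didn't match the cache; ignore the
    	// error in case it was already gone.
    	exec.Command("docker", "rmi", ref).Run()
    	// Stream the cached tarball into docker load, mirroring the log's
    	// `sudo cat <tarball> | docker load` pipeline.
    	cmd := exec.Command("/bin/bash", "-c",
    		fmt.Sprintf("sudo cat %s | docker load", tarball))
    	return cmd.Run()
    }

    func main() {
    	fmt.Println(loadFromCache("registry.k8s.io/pause:3.7", "/var/lib/minikube/images/pause_3.7"))
    }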
	I0320 17:15:27.559657   14536 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0320 17:15:27.559673   14536 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:27.559725   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:27.570133   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0320 17:15:27.573970   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:27.585016   14536 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0320 17:15:27.585039   14536 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:27.585090   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:27.595328   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0320 17:15:27.603548   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:27.613739   14536 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0320 17:15:27.613757   14536 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:27.613807   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:27.622484   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.623695   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	W0320 17:15:27.630781   14536 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:27.630901   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:27.630955   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:27.635107   14536 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0320 17:15:27.635126   14536 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.635169   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:27.656713   14536 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0320 17:15:27.656733   14536 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:27.656735   14536 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0320 17:15:27.656745   14536 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:27.656787   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:27.656788   14536 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:27.658048   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0320 17:15:27.671418   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0320 17:15:27.671426   14536 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0320 17:15:27.671511   14536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:27.671512   14536 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:27.673246   14536 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0320 17:15:27.673264   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0320 17:15:27.673285   14536 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0320 17:15:27.673295   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0320 17:15:27.743169   14536 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:27.743185   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0320 17:15:27.845737   14536 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0320 17:15:27.921809   14536 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:27.921823   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0320 17:15:28.055456   14536 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0320 17:15:28.145932   14536 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:28.146072   14536 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:28.160557   14536 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0320 17:15:28.160593   14536 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:28.160652   14536 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:28.172434   14536 cache_images.go:92] duration metric: took 2.753990958s to LoadCachedImages
	W0320 17:15:28.172479   14536 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0320 17:15:28.172484   14536 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0320 17:15:28.172526   14536 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=running-upgrade-120000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0320 17:15:28.172586   14536 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0320 17:15:28.192256   14536 cni.go:84] Creating CNI manager for ""
	I0320 17:15:28.192271   14536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:15:28.192276   14536 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0320 17:15:28.192284   14536 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-120000 NodeName:running-upgrade-120000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0320 17:15:28.192345   14536 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "running-upgrade-120000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0320 17:15:28.192408   14536 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0320 17:15:28.195832   14536 binaries.go:44] Found k8s binaries, skipping transfer
	I0320 17:15:28.195861   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0320 17:15:28.198875   14536 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0320 17:15:28.204287   14536 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0320 17:15:28.209081   14536 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
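The "scp memory --> path (N bytes)" lines write content rendered in memory (the kubelet drop-in, the unit file, the kubeadm config) straight to its destination; there is no source file. Run locally, that reduces to os.WriteFile, as in this sketch (the YAML literal is truncated for illustration):

    package main

    import "os"

    func main() {
    	// In the real flow this buffer holds the full generated kubeadm
    	// config; its byte count is what the log echoes back.
    	kubeadmYAML := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n")
    	if err := os.WriteFile("/var/tmp/minikube/kubeadm.yaml.new", kubeadmYAML, 0o644); err != nil {
    		panic(err)
    	}
    }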
	I0320 17:15:28.213994   14536 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0320 17:15:28.215144   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:28.294418   14536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:15:28.299322   14536 certs.go:68] Setting up /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000 for IP: 10.0.2.15
	I0320 17:15:28.299328   14536 certs.go:194] generating shared ca certs ...
	I0320 17:15:28.299336   14536 certs.go:226] acquiring lock for ca certs: {Name:mkd2fbb5d6b83ab66860b229253f0a2f91e28e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.299465   14536 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key
	I0320 17:15:28.300169   14536 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key
	I0320 17:15:28.300179   14536 certs.go:256] generating profile certs ...
	I0320 17:15:28.300394   14536 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.key
	I0320 17:15:28.300414   14536 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659
	I0320 17:15:28.300426   14536 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0320 17:15:28.369805   14536 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659 ...
	I0320 17:15:28.369814   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659: {Name:mkeb8e266eef5a9d37ac873d9939314a9ef807ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.370074   14536 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659 ...
	I0320 17:15:28.370079   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659: {Name:mk6cd5a274a6333bd53e1a8bc233b17cafd71792 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.370223   14536 certs.go:381] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt.4a2d3659 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt
	I0320 17:15:28.370355   14536 certs.go:385] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key.4a2d3659 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key
	I0320 17:15:28.370611   14536 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/proxy-client.key
	I0320 17:15:28.370794   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem (1338 bytes)
	W0320 17:15:28.370950   14536 certs.go:480] ignoring /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665_empty.pem, impossibly tiny 0 bytes
	I0320 17:15:28.370955   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem (1675 bytes)
	I0320 17:15:28.371015   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem (1082 bytes)
	I0320 17:15:28.371068   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem (1123 bytes)
	I0320 17:15:28.371120   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem (1679 bytes)
	I0320 17:15:28.371185   14536 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:28.372671   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0320 17:15:28.380703   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0320 17:15:28.387579   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0320 17:15:28.394645   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0320 17:15:28.401304   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0320 17:15:28.407634   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0320 17:15:28.414266   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0320 17:15:28.421071   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0320 17:15:28.427789   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0320 17:15:28.434961   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem --> /usr/share/ca-certificates/12665.pem (1338 bytes)
	I0320 17:15:28.441409   14536 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /usr/share/ca-certificates/126652.pem (1708 bytes)
	I0320 17:15:28.447898   14536 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0320 17:15:28.452938   14536 ssh_runner.go:195] Run: openssl version
	I0320 17:15:28.454724   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0320 17:15:28.457750   14536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:28.459168   14536 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 21 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:28.459187   14536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:28.461008   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0320 17:15:28.463806   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12665.pem && ln -fs /usr/share/ca-certificates/12665.pem /etc/ssl/certs/12665.pem"
	I0320 17:15:28.467043   14536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12665.pem
	I0320 17:15:28.468620   14536 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 21 00:02 /usr/share/ca-certificates/12665.pem
	I0320 17:15:28.468641   14536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12665.pem
	I0320 17:15:28.470663   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12665.pem /etc/ssl/certs/51391683.0"
	I0320 17:15:28.473492   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126652.pem && ln -fs /usr/share/ca-certificates/126652.pem /etc/ssl/certs/126652.pem"
	I0320 17:15:28.476496   14536 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126652.pem
	I0320 17:15:28.478082   14536 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 21 00:02 /usr/share/ca-certificates/126652.pem
	I0320 17:15:28.478101   14536 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126652.pem
	I0320 17:15:28.479763   14536 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126652.pem /etc/ssl/certs/3ec20f2e.0"
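Each CA certificate above is placed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is what lets OpenSSL's hash-based lookup find it. A sketch of the hash-and-symlink step; trustCert is an illustrative name, and the real flow runs these commands on the guest via sudo:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func trustCert(pem string) error {
    	// `openssl x509 -hash -noout` prints the subject hash, e.g. b5213941.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // ln -fs semantics: replace any stale link
    	return os.Symlink(pem, link)
    }

    func main() {
    	fmt.Println(trustCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }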
	I0320 17:15:28.482783   14536 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0320 17:15:28.484217   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0320 17:15:28.485986   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0320 17:15:28.487604   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0320 17:15:28.489517   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0320 17:15:28.492030   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0320 17:15:28.493801   14536 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
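The six `openssl x509 -noout ... -checkend 86400` probes above ask whether each control-plane certificate remains valid for at least one more day (86400 s); a non-zero exit means the cert expires within the window and would need regeneration. A sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // expiresWithinDay reports true when the cert will expire inside 86400 s.
    func expiresWithinDay(cert string) bool {
    	// openssl exits 0 when the cert will NOT expire within the window.
    	err := exec.Command("openssl", "x509", "-noout", "-in", cert, "-checkend", "86400").Run()
    	return err != nil
    }

    func main() {
    	fmt.Println(expiresWithinDay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }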
	I0320 17:15:28.495672   14536 kubeadm.go:391] StartCluster: {Name:running-upgrade-120000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52377 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:running-upgrade-120000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:15:28.495742   14536 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:28.506717   14536 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0320 17:15:28.509729   14536 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0320 17:15:28.509736   14536 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0320 17:15:28.509738   14536 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0320 17:15:28.509763   14536 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0320 17:15:28.512977   14536 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.513424   14536 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-120000" does not appear in /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:15:28.513532   14536 kubeconfig.go:62] /Users/jenkins/minikube-integration/17713-12248/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-120000" cluster setting kubeconfig missing "running-upgrade-120000" context setting]
	I0320 17:15:28.513728   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:28.514323   14536 kapi.go:59] client config for running-upgrade-120000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10427ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0320 17:15:28.514767   14536 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0320 17:15:28.517681   14536 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "running-upgrade-120000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
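Drift detection here is just diff -u between the kubeadm.yaml already on disk and the freshly rendered .new file; in this run it caught the cri-dockerd socket gaining its unix:// prefix and the cgroup driver changing from systemd to cgroupfs. A sketch of the check, relying on diff's exit status 1 for "files differ":

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // configDrifted runs `diff -u old new`; diff exits 1 when the files
    // differ (and >1 on real errors, which this sketch does not separate).
    func configDrifted(oldPath, newPath string) (bool, string) {
    	out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
    	return err != nil, string(out)
    }

    func main() {
    	drifted, diff := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(drifted)
    	fmt.Print(diff)
    }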
	I0320 17:15:28.517686   14536 kubeadm.go:1154] stopping kube-system containers ...
	I0320 17:15:28.517727   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:28.530517   14536 docker.go:483] Stopping containers: [4547de6e8257 a3b0979ad580 c543a8961041 ebe1641c4b92 e78ca58831f2 4e99a40742ea 42b87e69a0de 21c908c7689b 1077c3ad79cc b64b6f297233 9fa8564030f4 4c9bf4f7f50b c7d7f3fe21ac 5bcf20efa75f c6cd4fa42b8a e9c88a954eeb 8322a1713029 85bde627ac6c a7bf1291ce82 b95a6d9d3140 24ffbf9ce89f a75f28371b8a 6f17617dea23]
	I0320 17:15:28.530589   14536 ssh_runner.go:195] Run: docker stop 4547de6e8257 a3b0979ad580 c543a8961041 ebe1641c4b92 e78ca58831f2 4e99a40742ea 42b87e69a0de 21c908c7689b 1077c3ad79cc b64b6f297233 9fa8564030f4 4c9bf4f7f50b c7d7f3fe21ac 5bcf20efa75f c6cd4fa42b8a e9c88a954eeb 8322a1713029 85bde627ac6c a7bf1291ce82 b95a6d9d3140 24ffbf9ce89f a75f28371b8a 6f17617dea23
	I0320 17:15:28.542691   14536 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0320 17:15:28.627750   14536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:15:28.631555   14536 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5643 Mar 21 00:14 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5649 Mar 21 00:14 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2027 Mar 21 00:14 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5597 Mar 21 00:14 /etc/kubernetes/scheduler.conf
	
	I0320 17:15:28.631582   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf
	I0320 17:15:28.634763   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.634801   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:15:28.637485   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf
	I0320 17:15:28.639951   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.639977   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:15:28.642800   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf
	I0320 17:15:28.645454   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.645483   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:15:28.648004   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf
	I0320 17:15:28.650937   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:28.650956   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0320 17:15:28.653585   14536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:15:28.656260   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:28.686818   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:29.075616   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:29.327829   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:29.355684   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
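Because existing configuration files were found, the restart runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config rather than a full kubeadm init. A sketch of that sequence, with error handling reduced to a print:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
    	for _, p := range phases {
    		// Mirror the log's sudo env PATH=... invocation of the pinned binaries.
    		cmd := fmt.Sprintf(
    			`sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, p)
    		if err := exec.Command("/bin/bash", "-c", cmd).Run(); err != nil {
    			fmt.Println(p, err)
    		}
    	}
    }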
	I0320 17:15:29.381847   14536 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:15:29.381941   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:29.883961   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:30.383978   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:30.883979   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:31.383895   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:31.883888   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:31.888141   14536 api_server.go:72] duration metric: took 2.5063855s to wait for apiserver process to appear ...
	I0320 17:15:31.888153   14536 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:15:31.888170   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
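Nearly everything from here to the end of this excerpt is the healthz wait: both processes (pids 14525 and 14536) GET https://10.0.2.15:8443/healthz with a short client timeout and retry every ~5 s, and every attempt times out — the apiserver never becomes reachable. A sketch of the loop, with InsecureSkipVerify standing in for the real CA handling:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://10.0.2.15:8443/healthz")
    		if err != nil {
    			// Matches the "stopped: ... Client.Timeout exceeded" lines.
    			fmt.Println("stopped:", err)
    		} else {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver is healthy
    			}
    		}
    		time.Sleep(5 * time.Second)
    	}
    }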
	I0320 17:15:34.418512   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:34.418536   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:36.890041   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:36.890070   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:39.419095   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:39.419131   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:41.890300   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:41.890322   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:44.420392   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:44.420414   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:46.890462   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:46.890510   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:49.421590   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:49.421638   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:51.891106   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:51.891160   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:54.422868   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:54.422970   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:56.891567   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:56.891613   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:59.425244   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:59.425294   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:01.892283   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:01.892340   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:04.425961   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:04.426016   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:06.893766   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:06.893834   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:09.428141   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:09.428205   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:11.895435   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:11.895474   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:14.430459   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:14.431977   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:14.467800   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:14.467936   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:14.488660   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:14.488766   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:14.503520   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:14.503605   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:14.516539   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:14.516617   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:14.527982   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:14.528051   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:14.538283   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:14.538353   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:14.548968   14525 logs.go:276] 0 containers: []
	W0320 17:16:14.548981   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:14.549035   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:14.559932   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:14.559949   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:14.559955   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:14.574212   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:14.574220   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:14.585846   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:14.585856   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:14.598121   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:14.598135   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:14.609898   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:14.609908   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:14.646983   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:14.646992   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:14.650939   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:14.650945   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:14.761586   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:14.761599   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:14.772602   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:14.772613   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:14.790682   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:14.790692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:14.817542   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:14.817551   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:14.837886   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:14.837897   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:14.849811   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:14.849822   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:14.864500   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:14.864515   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:14.878991   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:14.879000   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:14.896369   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:14.896380   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:14.907630   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:14.907654   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
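When the healthz wait stalls, the harness gathers diagnostics: for each control-plane component it resolves containers by the k8s_<name> prefix via docker ps -a, then tails the last 400 log lines of each (plus the kubelet and docker journals, dmesg, and describe nodes). A sketch of the per-component pass; containersFor is an illustrative helper:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containersFor lists container IDs whose name starts with k8s_<name>.
    func containersFor(name string) []string {
    	out, _ := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
    	return strings.Fields(string(out))
    }

    func main() {
    	for _, comp := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
    		ids := containersFor(comp)
    		fmt.Printf("%d containers: %v\n", len(ids), ids)
    		for _, id := range ids {
    			// Tail each container's recent logs, as in logs.go:123.
    			exec.Command("/bin/bash", "-c", "docker logs --tail 400 "+id).Run()
    		}
    	}
    }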
	I0320 17:16:16.897479   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:16.897523   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:17.432813   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:21.899004   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:21.899075   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:22.434907   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:22.435273   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:22.468226   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:22.468352   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:22.488017   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:22.488109   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:22.502614   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:22.502687   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:22.514724   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:22.514805   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:22.525956   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:22.526016   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:22.536823   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:22.536881   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:22.547224   14525 logs.go:276] 0 containers: []
	W0320 17:16:22.547235   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:22.547294   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:22.558272   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:22.558289   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:22.558306   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:22.582643   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:22.582653   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:22.594256   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:22.594266   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:22.608518   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:22.608531   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:22.622830   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:22.622845   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:22.634728   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:22.634739   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:22.649988   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:22.650002   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:22.661858   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:22.661873   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:22.673702   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:22.673712   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
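
Note: the dmesg collector keeps only kernel messages of priority warning and worse, bounded to 400 lines like the other collectors. The command annotated (standard util-linux dmesg flags):

    #   -P        don't pipe output into a pager
    #   -H        human-readable timestamps
    #   -L=never  no color codes in the capture
    #   --level   keep only warning-and-worse kernel messages
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
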
	I0320 17:16:22.678276   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:22.678281   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:22.714661   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:22.714671   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:22.728552   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:22.728560   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:22.739594   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:22.739603   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:22.757389   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:22.757405   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:22.768661   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:22.768669   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:22.807046   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:22.807055   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:22.819716   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:22.819729   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
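
Note: once the container IDs are known, each "Gathering logs for <component> [<id>]" step above is a single `docker logs --tail 400 <id>`. Census and log pull combined in a sketch (gather_component_logs is a hypothetical helper name, not a minikube command):

    # Hypothetical helper combining the census and the per-container log pull.
    gather_component_logs() {
        local component="$1"
        docker ps -a --filter "name=k8s_${component}" --format '{{.ID}}' |
        while read -r id; do
            echo "==> ${component} [${id}]"
            docker logs --tail 400 "$id" 2>&1
        done
    }
    gather_component_logs kube-apiserver   # etc. for the other components
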
	I0320 17:16:25.350753   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:26.901441   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:26.901519   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:30.351186   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:30.351408   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:30.368362   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:30.368444   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:30.382056   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:30.382129   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:30.398889   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:30.398950   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:30.409816   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:30.409890   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:30.420639   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:30.420704   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:30.440035   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:30.440097   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:30.453838   14525 logs.go:276] 0 containers: []
	W0320 17:16:30.453856   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:30.453911   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:30.464618   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:30.464644   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:30.464652   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:30.501115   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:30.501125   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:30.515508   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:30.515518   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:30.529323   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:30.529331   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:30.541145   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:30.541157   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:30.580199   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:30.580210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:30.591771   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:30.591780   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:30.607225   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:30.607235   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:30.627931   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:30.627942   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:30.632009   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:30.632016   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:30.656482   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:30.656494   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:30.668297   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:30.668310   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:30.683906   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:30.683918   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:30.702076   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:30.702086   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:30.714241   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:30.714250   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:30.726353   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:30.726363   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:30.750386   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:30.750396   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
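
Note: the "container status" collector is a fallback chain. The backtick substitution inserts the path to crictl when `which` finds one, or the bare word crictl otherwise; if that command then fails, `||` falls through to plain `docker ps -a`. The same logic spelled out (an equivalent using command -v, not the literal command from the log):

    # Prefer crictl when it is on PATH, otherwise fall back to the Docker CLI.
    if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
    else
        sudo docker ps -a
    fi
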
	I0320 17:16:31.903849   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:31.904065   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:31.929158   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:31.929299   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:31.947686   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:31.947770   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:31.960842   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:31.960930   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:31.972915   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:31.972985   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:31.982999   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:31.983058   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:31.993563   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:31.993645   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:32.003919   14536 logs.go:276] 0 containers: []
	W0320 17:16:32.003933   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:32.003990   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:32.014954   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:32.014972   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:32.014978   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:32.027383   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:32.027396   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
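
Note: node state is collected with the kubectl binary minikube installs inside the guest, pinned to the in-guest kubeconfig rather than the host's. The same command, reformatted for readability:

    # kubectl shipped inside the guest, aimed at the cluster's own kubeconfig.
    sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig
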
	I0320 17:16:32.126799   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:32.126813   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:32.166669   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:32.166680   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:32.178724   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:32.178737   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:32.190737   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:32.190754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:32.209038   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:32.209047   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:33.264359   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:32.224606   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:32.224880   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:32.237441   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:32.237453   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:32.257938   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:32.257949   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:32.269307   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:32.269318   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:32.282057   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:32.282069   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:32.293706   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:32.293718   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:32.305304   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:32.305314   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:32.317439   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:32.317452   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:32.322460   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:32.322466   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:32.336748   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:32.336761   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:32.350603   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:32.350616   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:32.390754   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:32.390765   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
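
Note: host-level logs come from journald, one query per unit set: kubelet on its own, the docker and cri-docker units together, each capped at the newest 400 entries with -n:

    # The two journald collectors above.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
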
	I0320 17:16:34.916830   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:38.266471   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:38.266737   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:38.286672   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:38.286770   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:38.303272   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:38.303356   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:38.314833   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:38.314906   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:38.325480   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:38.325548   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:38.335719   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:38.335793   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:38.346228   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:38.346315   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:38.356504   14525 logs.go:276] 0 containers: []
	W0320 17:16:38.356519   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:38.356578   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:38.367139   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:38.367161   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:38.367166   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:38.390398   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:38.390406   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:38.401779   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:38.401794   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:38.429597   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:38.429610   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:38.443241   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:38.443250   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:38.457596   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:38.457609   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:38.474897   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:38.474907   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:38.486425   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:38.486437   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:38.524417   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:38.524432   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:38.529265   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:38.529274   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:38.540252   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:38.540265   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:38.555569   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:38.555579   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:38.567545   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:38.567556   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:38.578969   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:38.578979   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:38.618288   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:38.618299   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:38.630701   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:38.630712   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:38.641747   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:38.641758   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:41.157179   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:39.919385   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:39.919654   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:39.944683   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:39.944802   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:39.963657   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:39.963756   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:39.990796   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:39.990867   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:40.016529   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:40.016604   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:40.028850   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:40.028920   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:40.040081   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:40.040143   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:40.053598   14536 logs.go:276] 0 containers: []
	W0320 17:16:40.053610   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:40.053669   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:40.068676   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:40.068692   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:40.068698   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:40.086260   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:40.086271   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:40.098479   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:40.098490   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:40.110559   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:40.110572   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:40.127455   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:40.127467   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:40.139235   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:40.139245   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:40.151153   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:40.151165   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:40.191071   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:40.191079   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:40.227709   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:40.227722   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:40.242361   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:40.242373   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:40.270298   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:40.270309   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:40.308616   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:40.308628   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:40.322822   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:40.322833   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:40.337225   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:40.337236   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:40.349079   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:40.349090   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:40.361231   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:40.361242   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:40.372619   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:40.372632   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:40.384129   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:40.384139   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:40.388686   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:40.388695   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:46.159331   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
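
Note: stepping back, this whole stretch is two minikube processes (PIDs 14525 and 14536, both targeting 10.0.2.15:8443) stuck in the same wait loop: probe /healthz, and after each timeout re-run the full census-and-gather pass before probing again, which is why the same container IDs and commands repeat every few seconds. Their output is interleaved as flushed, so timestamps occasionally step backwards across PID boundaries. The shape of that loop as a sketch (the 5-second timeout is read off the timestamp spacing, not stated in the log):

    # Shape of the loop both PIDs are executing; collectors elided to one line.
    until curl -ks --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok$'; do
        echo "$(date +%T) apiserver still down; gathering diagnostics..."
        sudo journalctl -u kubelet -n 400 > /dev/null   # ...plus the other collectors
    done
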
	I0320 17:16:46.159589   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:46.181934   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:46.182035   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:46.197968   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:46.198047   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:46.210549   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:46.210630   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:46.221416   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:46.221492   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:46.232344   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:46.232417   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:46.242734   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:46.242800   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:46.252685   14525 logs.go:276] 0 containers: []
	W0320 17:16:46.252695   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:46.252763   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:46.263272   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:46.263289   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:46.263297   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:46.301149   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:46.301160   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:46.315181   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:46.315190   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:46.336862   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:46.336871   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:46.351917   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:46.351927   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:46.375656   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:46.375663   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:46.387858   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:46.387869   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:46.409023   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:46.409035   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:46.413563   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:46.413570   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:46.438417   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:46.438429   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:46.452583   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:46.452596   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:46.471334   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:46.471347   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:46.482867   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:46.482878   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:46.495181   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:46.495193   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:46.506517   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:46.506529   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:46.519456   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:46.519466   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:46.558106   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:46.558122   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:42.907878   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:49.075680   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:47.908546   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:47.908851   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:47.944645   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:47.944787   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:47.965154   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:47.965251   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:47.981284   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:47.981354   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:47.993378   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:47.993454   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:48.003984   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:48.004044   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:48.014622   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:48.014685   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:48.028756   14536 logs.go:276] 0 containers: []
	W0320 17:16:48.028767   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:48.028826   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:48.039787   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:48.039806   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:48.039811   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:48.080229   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:48.080251   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:48.093697   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:48.093711   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:48.120874   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:48.120888   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:48.133501   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:48.133514   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:48.168785   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:48.168796   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:48.182860   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:48.182874   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:48.194923   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:48.194933   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:48.206640   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:48.206651   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:48.218011   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:48.218025   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:48.229342   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:48.229361   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:48.234082   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:48.234091   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:48.248043   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:48.248054   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:48.270255   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:48.270265   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:48.281310   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:48.281323   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:48.304857   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:48.304868   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:48.344803   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:48.344815   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:48.368947   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:48.368959   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:48.381084   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:48.381095   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:50.893654   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:54.077977   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:54.078487   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:54.115368   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:54.115504   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:54.136364   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:54.136458   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:54.151148   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:54.151222   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:54.167452   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:54.167533   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:54.178590   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:54.178657   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:54.188976   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:54.189047   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:54.199232   14525 logs.go:276] 0 containers: []
	W0320 17:16:54.199242   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:54.199309   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:54.209887   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:54.209918   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:54.209924   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:54.221787   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:54.221797   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:54.234910   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:54.234922   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:54.247561   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:54.247570   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:54.258849   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:54.258860   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:54.283195   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:54.283204   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:54.308082   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:54.308093   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:54.312877   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:54.312885   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:54.347914   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:54.347925   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:54.362051   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:54.362063   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:54.376138   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:54.376148   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:54.388400   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:54.388410   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:54.404207   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:54.404218   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:54.415951   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:54.415966   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:54.455024   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:54.455033   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:54.477058   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:54.477069   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:54.488352   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:54.488363   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:55.895740   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:55.896118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:55.930031   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:16:55.930164   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:55.950399   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:16:55.950490   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:55.964892   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:16:55.964972   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:55.977180   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:16:55.977242   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:55.987411   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:16:55.987478   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:55.997910   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:16:55.997969   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:56.008840   14536 logs.go:276] 0 containers: []
	W0320 17:16:56.008850   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:56.008902   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:56.019366   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:16:56.019380   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:56.019385   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:56.060978   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:16:56.060991   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:16:56.072806   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:16:56.072823   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:56.086012   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:16:56.086023   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:16:56.098048   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:56.098060   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:56.124119   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:16:56.124129   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:16:56.139286   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:16:56.139297   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:16:56.150877   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:16:56.150887   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:16:56.167300   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:16:56.167312   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:16:56.181853   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:16:56.181865   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:16:56.199159   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:16:56.199170   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:16:56.210309   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:16:56.210320   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:16:56.224682   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:16:56.224693   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:16:56.262037   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:16:56.262048   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:16:56.276151   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:16:56.276162   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:16:56.288291   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:56.288301   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:56.293249   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:56.293256   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:56.327633   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:16:56.327645   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:16:56.342467   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:16:56.342478   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:16:57.006001   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:58.856430   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:02.008502   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:02.008857   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:02.048425   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:02.048553   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:02.066139   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:02.066233   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:02.078987   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:02.079060   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:02.091101   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:02.091187   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:02.101751   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:02.101821   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:02.112265   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:02.112327   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:02.122802   14525 logs.go:276] 0 containers: []
	W0320 17:17:02.122814   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:02.122875   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:02.133242   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:02.133259   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:02.133265   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:02.144801   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:02.144811   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:02.169818   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:02.169828   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:02.183758   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:02.183768   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:02.195240   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:02.195249   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:02.206776   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:02.206785   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:02.217994   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:02.218004   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:02.222233   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:02.222239   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:02.236222   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:02.236230   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:02.247285   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:02.247303   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:02.262319   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:02.262329   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:02.284304   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:02.284313   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:02.301565   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:02.301575   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:02.335612   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:02.335625   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:02.349221   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:02.349234   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:02.360945   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:02.360956   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:02.400511   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:02.400522   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:04.927331   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:03.857168   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:03.857363   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:03.878697   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:03.878813   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:03.893715   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:03.893791   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:03.906498   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:03.906579   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:03.917730   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:03.917801   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:03.928379   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:03.928453   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:03.939226   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:03.939302   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:03.949059   14536 logs.go:276] 0 containers: []
	W0320 17:17:03.949071   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:03.949118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:03.959765   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:03.959784   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:03.959790   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:03.974044   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:03.974054   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:03.985459   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:03.985469   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:03.996398   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:03.996412   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:04.027550   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:04.027561   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:04.041443   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:04.041454   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:04.076755   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:04.076770   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:04.116508   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:04.116527   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:04.128224   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:04.128236   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:04.140037   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:04.140049   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:04.151549   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:04.151565   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:04.192393   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:04.192413   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:04.222122   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:04.222137   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:04.251134   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:04.251145   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:04.262903   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:04.262914   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:04.280785   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:04.280797   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:04.293176   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:04.293188   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:04.298021   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:04.298027   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:04.316858   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:04.316869   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:06.830344   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:09.928131   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:09.928336   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:09.946408   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:09.946512   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:09.960804   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:09.960885   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:09.972606   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:09.972689   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:09.982996   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:09.983070   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:09.992811   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:09.992872   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:10.007400   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:10.007469   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:10.018430   14525 logs.go:276] 0 containers: []
	W0320 17:17:10.018444   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:10.018503   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:10.029092   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:10.029110   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:10.029116   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:10.040899   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:10.040912   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:10.075680   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:10.075692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:10.088001   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:10.088014   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:10.106772   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:10.106782   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:10.118332   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:10.118342   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:10.132598   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:10.132609   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:10.156954   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:10.156962   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:10.182354   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:10.182365   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:10.196438   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:10.196450   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:10.211097   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:10.211109   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:10.222646   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:10.222657   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:10.234255   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:10.234268   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:10.245564   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:10.245577   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:10.284095   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:10.284103   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:10.288854   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:10.288863   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:10.306091   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:10.306101   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:11.832794   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:11.832959   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:11.849059   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:11.849132   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:11.864930   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:11.865003   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:11.875525   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:11.875594   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:11.885892   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:11.885955   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:11.900380   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:11.900447   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:11.911048   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:11.911118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:11.921656   14536 logs.go:276] 0 containers: []
	W0320 17:17:11.921669   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:11.921720   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:11.932276   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:11.932294   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:11.932301   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:11.943569   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:11.943581   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:11.983792   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:11.983802   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:11.997646   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:11.997658   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:12.009317   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:12.009327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:12.021328   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:12.021337   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:12.033037   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:12.033052   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:12.062333   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:12.062343   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:12.074448   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:12.074459   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:12.091230   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:12.091246   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:12.095675   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:12.095682   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:12.111707   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:12.111716   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:12.149730   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:12.149743   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:12.163471   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:12.163480   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:12.177898   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:12.177910   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:12.194772   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:12.194781   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:12.822897   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:12.229938   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:12.229949   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:12.241649   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:12.241659   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:12.252767   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:12.252777   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:14.767089   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:17.824663   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:17.824795   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:17.835807   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:17.835881   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:17.847096   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:17.847158   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:17.857637   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:17.857697   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:17.869115   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:17.869185   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:17.880018   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:17.880089   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:17.891044   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:17.891120   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:17.900933   14525 logs.go:276] 0 containers: []
	W0320 17:17:17.900943   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:17.901003   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:17.911503   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:17.911521   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:17.911526   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:17.915678   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:17.915687   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:17.940245   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:17.940255   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:17.953736   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:17.953746   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:17.968727   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:17.968740   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:17.980184   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:17.980194   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:17.994636   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:17.994645   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:18.011797   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:18.011808   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:18.022959   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:18.022971   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:18.046294   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:18.046302   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:18.057738   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:18.057750   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:18.094870   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:18.094878   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:18.130682   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:18.130692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:18.144604   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:18.144615   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:18.155698   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:18.155707   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:18.171946   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:18.171956   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:18.183132   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:18.183145   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:20.698253   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:19.769267   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:19.769377   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:19.780406   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:19.780484   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:19.791415   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:19.791494   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:19.801758   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:19.801832   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:19.811558   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:19.811615   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:19.821945   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:19.822015   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:19.832498   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:19.832568   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:19.842839   14536 logs.go:276] 0 containers: []
	W0320 17:17:19.842851   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:19.842910   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:19.853267   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:19.853283   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:19.853288   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:19.892303   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:19.892315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:19.904361   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:19.904375   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:19.916170   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:19.916183   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:19.928026   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:19.928037   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:19.966015   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:19.966026   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:19.979207   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:19.979216   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:19.990441   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:19.990456   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:20.002199   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:20.002210   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:20.006480   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:20.006487   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:20.017415   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:20.017425   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:20.034698   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:20.034709   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:20.051885   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:20.051895   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:20.094554   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:20.094566   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:20.108466   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:20.108478   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:20.122440   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:20.122453   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:20.136484   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:20.136497   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:20.148765   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:20.148776   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:20.160833   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:20.160844   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:25.698480   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:25.698660   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:25.713348   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:25.713430   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:25.724997   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:25.725071   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:25.735754   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:25.735819   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:25.747591   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:25.747656   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:25.759131   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:25.759203   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:25.769709   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:25.769776   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:25.779844   14525 logs.go:276] 0 containers: []
	W0320 17:17:25.779857   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:25.779909   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:25.790389   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:25.790407   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:25.790413   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:25.794727   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:25.794734   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:25.809395   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:25.809406   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:25.820741   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:25.820753   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:25.832344   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:25.832353   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:25.843678   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:25.843688   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:25.868452   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:25.868462   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:25.903516   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:25.903526   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:25.917844   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:25.917854   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:25.929815   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:25.929828   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:25.941206   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:25.941216   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:25.952355   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:25.952367   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:25.989097   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:25.989105   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:26.003637   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:26.003648   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:26.018818   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:26.018829   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:26.042833   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:26.042843   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:26.067960   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:26.067970   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:22.689334   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:28.581621   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:27.691832   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:27.692050   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:27.713549   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:27.713643   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:27.728283   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:27.728359   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:27.740559   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:27.740625   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:27.753273   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:27.753347   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:27.763994   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:27.764071   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:27.774232   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:27.774303   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:27.784388   14536 logs.go:276] 0 containers: []
	W0320 17:17:27.784398   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:27.784452   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:27.795812   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:27.795826   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:27.795831   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:27.812543   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:27.812555   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:27.839370   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:27.839380   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:27.857367   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:27.857378   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:27.869382   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:27.869392   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:27.880644   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:27.880655   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:27.891905   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:27.891916   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:27.905227   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:27.905237   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:27.917096   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:27.917107   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:27.929627   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:27.929638   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:27.970831   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:27.970843   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:28.007311   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:28.007324   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:28.018169   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:28.018181   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:28.030878   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:28.030887   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:28.066910   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:28.066921   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:28.083235   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:28.083245   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:28.104582   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:28.104594   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:28.121368   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:28.121380   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:28.126063   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:28.126073   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:30.641970   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:33.583751   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:33.583885   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:33.602191   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:33.602271   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:33.615765   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:33.615839   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:33.626620   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:33.626692   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:33.636935   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:33.637004   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:33.653567   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:33.653639   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:33.664190   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:33.664248   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:33.674590   14525 logs.go:276] 0 containers: []
	W0320 17:17:33.674604   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:33.674658   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:33.684992   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:33.685009   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:33.685015   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:33.696547   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:33.696561   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:33.707608   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:33.707620   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:33.733420   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:33.733428   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:33.737442   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:33.737449   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:33.757108   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:33.757123   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:33.782748   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:33.782762   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:33.797630   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:33.797642   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:33.809270   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:33.809282   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:33.845181   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:33.845192   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:33.856470   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:33.856481   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:33.870194   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:33.870207   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:33.885969   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:33.885981   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:33.903451   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:33.903461   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:33.941217   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:33.941228   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:33.957447   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:33.957457   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:33.969324   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:33.969338   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:36.482964   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:35.644166   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:35.644307   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:35.658588   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:35.658669   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:35.670777   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:35.670849   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:35.681241   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:35.681312   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:35.691940   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:35.692008   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:35.702139   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:35.702203   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:35.718444   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:35.718520   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:35.729127   14536 logs.go:276] 0 containers: []
	W0320 17:17:35.729139   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:35.729197   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:35.739405   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:35.739423   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:35.739428   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:35.751351   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:35.751363   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:35.763268   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:35.763285   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:35.789265   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:35.789274   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:35.793571   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:35.793581   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:35.804987   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:35.804997   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:35.821901   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:35.821910   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:35.834028   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:35.834040   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:35.849791   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:35.849802   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:35.861854   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:35.861864   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:35.873060   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:35.873071   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:35.884425   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:35.884438   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:35.897946   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:35.897955   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:35.934208   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:35.934219   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:35.949529   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:35.949540   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:35.964283   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:35.964292   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:36.005695   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:36.005704   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:36.041212   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:36.041225   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:36.058909   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:36.058922   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:41.485177   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:41.485364   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:41.497320   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:41.497399   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:41.507649   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:41.507715   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:41.518013   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:41.518083   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:41.528461   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:41.528525   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:41.538920   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:41.538982   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:41.549189   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:41.549255   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:41.559525   14525 logs.go:276] 0 containers: []
	W0320 17:17:41.559536   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:41.559590   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:41.574461   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:41.574478   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:41.574482   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:41.598999   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:41.599017   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:41.638532   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:41.638549   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:41.673650   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:41.673663   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:41.685844   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:41.685854   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:41.697493   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:41.697504   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:41.710983   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:41.710993   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:41.728521   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:41.728533   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:38.577324   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:41.741602   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:41.743748   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:41.755272   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:41.755284   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:41.767321   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:41.767331   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:41.778890   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:41.778901   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:41.782903   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:41.782910   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:41.807552   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:41.807562   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:41.821677   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:41.821687   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:41.835685   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:41.835696   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:41.852269   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:41.852278   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:44.366300   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:43.579692   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:43.579877   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:43.596176   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:43.596286   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:43.609786   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:43.609880   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:43.620899   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:43.620974   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:43.632235   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:43.632320   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:43.643675   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:43.643760   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:43.655098   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:43.655184   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:43.665339   14536 logs.go:276] 0 containers: []
	W0320 17:17:43.665353   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:43.665424   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:43.676258   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:43.676276   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:43.676282   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:43.702938   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:43.702950   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:43.708199   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:43.708205   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:43.744908   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:43.744922   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:43.773942   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:43.773952   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:43.786765   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:43.786777   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:43.813901   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:43.813910   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:43.825159   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:43.825171   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:43.862525   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:43.862538   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:43.874298   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:43.874311   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:43.891131   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:43.891143   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:43.902326   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:43.902338   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:43.922196   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:43.922208   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:43.962933   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:43.962948   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:43.976851   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:43.976861   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:43.991070   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:43.991084   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:44.002885   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:44.002898   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:44.014470   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:44.014483   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:44.030686   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:44.030698   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:46.545128   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:49.367558   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:49.367920   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:49.392859   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:49.392979   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:49.409724   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:49.409806   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:49.428725   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:49.428799   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:49.439152   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:49.439220   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:49.449893   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:49.449961   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:49.460162   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:49.460223   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:49.470457   14525 logs.go:276] 0 containers: []
	W0320 17:17:49.470468   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:49.470526   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:49.481309   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:49.481326   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:49.481332   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:49.495316   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:49.495328   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:49.506829   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:49.506839   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:49.517765   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:49.517775   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:49.554643   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:49.554659   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:49.568366   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:49.568379   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:49.582010   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:49.582020   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:49.599122   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:49.599131   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:49.603466   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:49.603473   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:49.641074   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:49.641086   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:49.657026   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:49.657038   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:49.669364   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:49.669379   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:49.684756   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:49.684769   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:49.696339   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:49.696351   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:49.710116   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:49.710126   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:49.732765   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:49.732775   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:49.759908   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:49.759918   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:51.547737   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:51.548204   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:51.590953   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:51.591095   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:51.611983   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:51.612088   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:51.626919   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:51.627007   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:51.640076   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:51.640145   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:51.651825   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:51.651899   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:51.666838   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:51.666911   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:51.676735   14536 logs.go:276] 0 containers: []
	W0320 17:17:51.676750   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:51.676808   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:51.687210   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:51.687227   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:51.687233   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:51.701788   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:51.701798   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:51.713715   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:51.713729   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:51.724627   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:51.724641   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:51.735920   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:51.735932   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:51.740446   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:51.740455   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:51.775305   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:51.775315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:51.792269   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:51.792278   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:51.803931   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:51.803941   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:51.829506   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:51.829517   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:51.843041   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:51.843050   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:51.854861   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:51.854874   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:51.866484   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:51.866498   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:51.884038   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:51.884049   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:51.925799   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:51.925807   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:51.964579   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:51.964589   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:51.978609   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:51.978622   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:51.990107   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:51.990118   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:17:52.001662   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:52.001672   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:52.274547   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:54.514325   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:57.276729   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:57.277105   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:57.320102   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:57.320229   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:57.344302   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:57.344388   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:57.358304   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:57.358370   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:57.369758   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:57.369831   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:57.381176   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:57.381245   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:57.393889   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:57.393954   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:57.404138   14525 logs.go:276] 0 containers: []
	W0320 17:17:57.404148   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:57.404203   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:57.414601   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:57.414618   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:57.414624   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:57.430802   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:57.430812   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:57.442619   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:57.442628   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:57.454266   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:57.454277   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:57.477103   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:57.477112   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:57.513537   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:57.513546   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:57.534910   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:57.534921   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:57.552056   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:57.552067   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:57.564269   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:57.564283   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:57.576493   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:57.576502   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:57.611689   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:57.611700   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:57.626103   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:57.626114   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:57.651798   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:57.651810   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:57.665921   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:57.665931   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:57.676976   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:57.676986   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:57.681316   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:57.681325   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:57.692964   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:57.692976   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:00.213904   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:59.516883   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:59.517213   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:59.550284   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:17:59.550479   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:59.571634   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:17:59.571725   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:59.585253   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:17:59.585326   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:59.597659   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:17:59.597731   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:59.608627   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:17:59.608695   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:59.619456   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:17:59.619528   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:59.630299   14536 logs.go:276] 0 containers: []
	W0320 17:17:59.630310   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:59.630368   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:59.641597   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:17:59.641613   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:17:59.641618   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:17:59.681249   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:17:59.681265   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:17:59.699625   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:17:59.699636   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:17:59.710670   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:17:59.710682   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:59.723031   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:59.723042   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:59.766145   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:17:59.766157   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:17:59.780306   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:17:59.780315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:17:59.794513   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:17:59.794525   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:17:59.806249   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:17:59.806260   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:17:59.817825   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:59.817836   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:59.841995   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:17:59.842003   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:17:59.855926   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:17:59.855937   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:17:59.872345   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:17:59.872358   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:17:59.890040   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:17:59.890049   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:17:59.902526   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:59.902538   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:59.907501   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:59.907508   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:59.947464   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:17:59.947477   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:17:59.959602   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:17:59.959613   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:17:59.974317   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:17:59.974327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:05.215629   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:05.215959   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:05.244303   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:05.244429   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:05.261197   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:05.261279   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:05.274655   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:05.274723   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:05.286769   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:05.286844   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:05.297408   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:05.297469   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:05.307903   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:05.307967   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:05.317705   14525 logs.go:276] 0 containers: []
	W0320 17:18:05.317715   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:05.317766   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:05.332098   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:05.332114   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:05.332120   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:05.357471   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:05.357486   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:05.372214   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:05.372223   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:05.394966   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:05.394974   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:05.406511   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:05.406529   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:05.443964   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:05.443975   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:05.455806   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:05.455816   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:05.467281   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:05.467295   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:05.471476   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:05.471482   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:05.485088   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:05.485098   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:05.500177   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:05.500187   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:05.512711   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:05.512722   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:05.529309   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:05.529319   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:05.542148   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:05.542159   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:05.578261   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:05.578272   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:05.592138   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:05.592149   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:05.604049   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:05.604065   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:02.491795   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:08.117892   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:07.494152   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:07.494502   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:07.524035   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:07.524164   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:07.541438   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:07.541521   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:07.556722   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:07.556807   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:07.568289   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:07.568363   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:07.579493   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:07.579563   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:07.596692   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:07.596762   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:07.607180   14536 logs.go:276] 0 containers: []
	W0320 17:18:07.607194   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:07.607255   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:07.617542   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:07.617558   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:07.617563   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:07.628824   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:07.628840   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:07.640531   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:07.640544   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:07.664267   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:07.664274   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:07.680852   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:07.680862   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:07.692518   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:07.692529   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:07.714338   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:07.714350   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:07.726351   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:07.726364   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:07.737249   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:07.737261   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:07.741685   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:07.741694   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:07.755851   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:07.755865   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:07.792961   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:07.792974   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:07.804616   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:07.804627   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:07.815589   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:07.815606   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:07.827663   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:07.827673   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:07.862647   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:07.862658   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:07.877399   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:07.877408   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:07.891791   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:07.891803   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:07.931864   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:07.931880   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:10.447538   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:13.119891   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:13.120084   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:13.140515   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:13.140611   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:13.154935   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:13.155002   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:13.167207   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:13.167281   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:13.178165   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:13.178226   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:13.188789   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:13.188858   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:13.199664   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:13.199738   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:13.209901   14525 logs.go:276] 0 containers: []
	W0320 17:18:13.209912   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:13.209968   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:13.220755   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:13.220772   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:13.220777   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:13.260143   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:13.260158   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:13.274200   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:13.274212   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:13.285859   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:13.285872   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:13.298026   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:13.298036   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:13.322755   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:13.322765   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:13.337754   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:13.337765   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:13.357050   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:13.357061   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:13.371102   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:13.371113   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:13.383057   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:13.383067   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:13.406336   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:13.406347   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:13.442940   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:13.442951   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:13.457840   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:13.457851   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:13.469688   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:13.469699   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:13.487766   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:13.487775   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:13.492100   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:13.492106   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:13.503819   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:13.503829   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:16.015604   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:15.448552   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:15.448811   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:15.475706   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:15.475827   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:15.497220   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:15.497295   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:15.510172   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:15.510247   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:15.522001   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:15.522063   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:15.533853   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:15.533922   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:15.545936   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:15.546014   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:15.559510   14536 logs.go:276] 0 containers: []
	W0320 17:18:15.559521   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:15.559573   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:15.570857   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:15.570871   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:15.570877   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:15.582849   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:15.582863   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:15.595992   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:15.596002   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:15.635475   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:15.635484   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:15.653066   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:15.653077   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:15.690668   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:15.690680   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:15.705930   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:15.705945   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:15.710673   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:15.710681   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:15.723181   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:15.723191   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:15.746027   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:15.746038   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:15.783366   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:15.783379   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:15.795786   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:15.795798   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:15.813089   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:15.813098   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:15.828132   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:15.828147   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:15.839625   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:15.839639   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:15.852487   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:15.852501   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:15.867417   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:15.867428   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:15.879171   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:15.879181   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:15.897313   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:15.897324   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:21.017681   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:21.018101   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:21.050821   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:21.050941   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:21.069322   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:21.069422   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:21.084123   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:21.084201   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:21.095577   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:21.095641   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:21.105907   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:21.105976   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:21.116098   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:21.116162   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:21.125903   14525 logs.go:276] 0 containers: []
	W0320 17:18:21.125914   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:21.125970   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:21.136471   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:21.136489   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:21.136494   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:21.149815   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:21.149827   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:21.169747   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:21.169758   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:21.183798   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:21.183807   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:21.195236   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:21.195251   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:21.212765   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:21.212779   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:21.224606   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:21.224615   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:21.249063   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:21.249073   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:21.263320   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:21.263330   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:21.274653   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:21.274664   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:21.278725   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:21.278734   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:21.312521   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:21.312531   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:21.323857   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:21.323867   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:21.338499   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:21.338508   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:21.349964   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:21.349980   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:21.387021   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:21.387029   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:21.401892   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:21.401901   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:18.410906   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:23.927532   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:23.413091   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:23.413420   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:23.452761   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:23.452909   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:23.473291   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:23.473380   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:23.487857   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:23.487933   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:23.500337   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:23.500407   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:23.512241   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:23.512311   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:23.523102   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:23.523174   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:23.533757   14536 logs.go:276] 0 containers: []
	W0320 17:18:23.533769   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:23.533825   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:23.544613   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:23.544632   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:23.544637   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:23.568551   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:23.568562   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:23.574027   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:23.574038   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:23.612449   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:23.612460   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:23.624261   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:23.624274   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:23.636014   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:23.636024   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:23.659949   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:23.659957   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:23.701208   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:23.701228   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:23.713188   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:23.713201   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:23.725572   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:23.725586   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:23.743246   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:23.743257   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:23.755255   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:23.755265   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:23.769665   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:23.769679   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:23.808057   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:23.808068   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:23.822925   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:23.822936   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:23.837805   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:23.837816   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:23.849699   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:23.849710   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:23.862039   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:23.862051   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:23.875116   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:23.875127   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
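The gathering pass that just finished fans out over a fixed set of sources: the last 400 lines of each component container's logs, the kubelet and Docker/cri-docker journald units, a severity-filtered dmesg, kubectl describe nodes through the pinned in-VM binary, and a container-status listing that falls back from crictl to docker. Collected into one sketch, with every command copied from the Run lines above (the container ID is just the apiserver example):

	# Per-container logs, last 400 lines.
	docker logs --tail 400 989e89c1ff06
	# Unit logs for the kubelet and the container runtime.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u docker -u cri-docker -n 400
	# Kernel messages, warning severity and above only.
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# Node view through the version-pinned kubectl inside the VM.
	sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig
	# Container status: prefer crictl, fall back to docker.
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a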
	I0320 17:18:26.388345   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:28.929669   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:28.930061   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:28.967176   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:28.967309   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:28.990228   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:28.990332   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:29.005427   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:29.005519   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:29.017926   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:29.017996   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:29.029402   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:29.029474   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:29.041389   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:29.041462   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:29.053418   14525 logs.go:276] 0 containers: []
	W0320 17:18:29.053433   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:29.053514   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:29.069019   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:29.069039   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:29.069044   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:29.086852   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:29.086862   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:29.098588   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:29.098599   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:29.109768   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:29.109778   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:29.146791   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:29.146815   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:29.171116   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:29.171129   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:29.182759   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:29.182772   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:29.193930   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:29.193941   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:29.206260   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:29.206279   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:29.210504   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:29.210511   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:29.224794   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:29.224808   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:29.240737   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:29.240750   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:29.257188   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:29.257198   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:29.271091   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:29.271105   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:29.282606   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:29.282614   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:29.304955   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:29.304963   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:29.342024   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:29.342036   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
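From here on, the two test processes (PIDs 14525 and 14536) keep interleaving the same cadence against the same guest IP: probe healthz, hit the deadline, rerun the diagnostic pass, retry. A rough shell rendering of that outer loop; the interval is inferred from the timestamps, and collect_diagnostics is a hypothetical stand-in for the log-gathering pass above, not a real minikube command:

	# Approximate retry loop; timeout and sleep inferred from log timestamps.
	while ! curl -kfs --max-time 5 https://10.0.2.15:8443/healthz; do
	  collect_diagnostics   # hypothetical: the docker ps / docker logs pass above
	  sleep 3
	done
	echo "apiserver healthy"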
	I0320 17:18:31.390030   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:31.390155   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:31.402216   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:31.402295   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:31.415105   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:31.415170   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:31.427724   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:31.427803   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:31.438876   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:31.438937   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:31.449999   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:31.450067   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:31.461302   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:31.461370   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:31.472619   14536 logs.go:276] 0 containers: []
	W0320 17:18:31.472636   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:31.472693   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:31.485732   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:31.485752   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:31.485757   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:31.501089   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:31.501099   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:31.539142   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:31.539153   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:31.552497   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:31.552516   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:31.566958   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:31.566971   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:31.579607   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:31.579618   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:31.619897   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:31.619912   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:31.624370   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:31.624379   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:31.662379   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:31.662390   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:31.677027   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:31.677040   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:31.688573   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:31.688583   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:31.700590   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:31.700604   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:31.712440   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:31.712450   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:31.724218   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:31.724229   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:31.739486   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:31.739495   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:31.764136   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:31.764149   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:31.782432   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:31.782443   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:31.795038   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:31.795051   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:31.814354   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:31.814366   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:31.858491   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:34.328078   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:36.860558   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:36.860953   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:36.893857   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:36.894003   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:36.913150   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:36.913243   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:36.928640   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:36.928722   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:36.940655   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:36.940724   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:36.951158   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:36.951223   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:36.967133   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:36.967203   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:36.977431   14525 logs.go:276] 0 containers: []
	W0320 17:18:36.977440   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:36.977495   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:36.988454   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:36.988471   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:36.988476   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:37.011167   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:37.011175   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:37.048504   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:37.048519   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:37.066711   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:37.066728   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:37.098256   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:37.098272   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:37.120610   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:37.120623   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:37.132759   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:37.132772   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:37.136782   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:37.136792   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:37.150475   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:37.150486   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:37.163066   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:37.163077   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:37.174614   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:37.174624   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:37.198786   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:37.198797   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:37.214824   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:37.214834   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:37.230744   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:37.230753   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:37.242986   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:37.242995   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:37.254763   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:37.254774   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:37.293542   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:37.293552   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:39.810096   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:39.330227   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:39.330404   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:39.347161   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:39.347245   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:39.362542   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:39.362611   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:39.373450   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:39.373522   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:39.385304   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:39.385373   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:39.397221   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:39.397297   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:39.408637   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:39.408710   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:39.421993   14536 logs.go:276] 0 containers: []
	W0320 17:18:39.422005   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:39.422063   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:39.433310   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:39.433327   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:39.433332   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:39.445312   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:39.445327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:39.456771   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:39.456784   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:39.475552   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:39.475563   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:39.479919   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:39.479927   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:39.516990   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:39.517001   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:39.529185   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:39.529195   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:39.546912   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:39.546925   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:39.560813   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:39.560824   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:39.580536   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:39.580547   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:39.592670   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:39.592682   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:39.615739   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:39.615747   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:39.655320   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:39.655330   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:39.667886   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:39.667899   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:39.679774   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:39.679786   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:39.692011   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:39.692021   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:39.732116   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:39.732131   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:39.748639   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:39.748650   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:39.763532   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:39.763545   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:44.811434   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:44.811911   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:44.847049   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:44.847175   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:44.864323   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:44.864415   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:44.878003   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:44.878076   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:44.889677   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:44.889751   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:44.900325   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:44.900394   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:44.910744   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:44.910804   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:44.921216   14525 logs.go:276] 0 containers: []
	W0320 17:18:44.921226   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:44.921289   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:44.931617   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:44.931634   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:44.931640   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:44.970147   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:44.970156   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:45.017655   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:45.017666   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:45.029104   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:45.029119   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:45.040530   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:45.040539   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:45.052076   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:45.052087   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:45.057007   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:45.057015   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:45.072510   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:45.072520   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:45.087694   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:45.087705   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:45.100228   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:45.100240   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:45.112695   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:45.112707   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:45.126920   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:45.126930   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:45.151553   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:45.151567   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:45.166180   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:45.166190   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:45.181133   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:45.181143   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:45.198535   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:45.198545   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:45.210016   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:45.210027   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:42.280835   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:47.734000   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:47.281646   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:47.281812   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:47.295660   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:47.295758   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:47.308269   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:47.308341   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:47.319023   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:47.319114   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:47.330207   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:47.330280   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:47.340586   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:47.340653   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:47.351464   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:47.351530   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:47.362725   14536 logs.go:276] 0 containers: []
	W0320 17:18:47.362736   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:47.362787   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:47.373424   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:47.373441   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:47.373446   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:47.388099   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:47.388111   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:47.399366   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:47.399381   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:47.410873   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:47.410883   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:47.424814   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:47.424827   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:47.436488   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:47.436500   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:47.459092   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:47.459099   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:47.495031   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:47.495043   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:47.509353   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:47.509364   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:47.525913   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:47.525922   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:18:47.551146   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:47.551156   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:47.567599   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:47.567611   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:47.580212   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:47.580223   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:47.584823   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:47.584831   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:47.598810   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:47.598824   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:47.634979   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:47.634990   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:47.646882   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:47.646897   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:47.688393   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:47.688402   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:47.702118   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:47.702129   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:50.214620   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:52.736121   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:52.736435   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:52.772612   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:52.772744   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:52.794838   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:52.794930   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:52.808425   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:52.808499   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:52.820115   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:52.820190   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:52.830603   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:52.830671   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:52.840956   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:52.841021   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:52.856779   14525 logs.go:276] 0 containers: []
	W0320 17:18:52.856790   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:52.856846   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:52.867394   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:52.867411   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:52.867416   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:52.903390   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:52.903401   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:52.929245   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:52.929255   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:52.940400   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:52.940409   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:52.952198   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:52.952208   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:52.974228   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:52.974235   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:53.010920   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:53.010927   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:53.026683   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:53.026692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:53.038147   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:53.038159   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:53.049740   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:53.049749   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:53.061163   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:53.061172   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:53.074246   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:53.074259   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:53.091989   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:53.092001   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:53.103544   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:53.103554   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:53.107763   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:53.107770   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:53.129620   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:53.129630   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:53.143937   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:53.143950   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:55.661115   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:55.216858   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:55.217272   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:55.257922   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:18:55.258064   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:55.278867   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:18:55.278966   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:55.296120   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:18:55.296202   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:55.311362   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:18:55.311434   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:55.336040   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:18:55.336113   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:55.346483   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:18:55.346565   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:55.356396   14536 logs.go:276] 0 containers: []
	W0320 17:18:55.356408   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:55.356465   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:55.366884   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:18:55.366902   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:18:55.366908   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:18:55.380899   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:18:55.380911   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:18:55.395997   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:18:55.396006   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:18:55.412203   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:18:55.412214   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:18:55.428777   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:18:55.428793   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:55.441102   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:18:55.441113   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:18:55.480685   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:18:55.480694   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:18:55.491933   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:18:55.491946   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:18:55.503667   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:18:55.503676   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:18:55.515198   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:18:55.515212   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:18:55.526832   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:55.526845   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:55.531128   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:55.531136   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:55.567543   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:18:55.567554   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:18:55.579236   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:18:55.579245   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:18:55.590855   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:55.590866   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:55.614556   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:55.614564   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:55.655506   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:18:55.655513   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:18:55.670969   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:18:55.670979   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:18:55.682617   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:18:55.682629   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:00.663131   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:00.663377   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:00.687107   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:19:00.687220   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:00.703989   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:19:00.704054   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:00.716590   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:19:00.716658   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:00.727652   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:19:00.727722   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:00.740035   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:19:00.740099   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:00.754830   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:19:00.754904   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:00.769632   14525 logs.go:276] 0 containers: []
	W0320 17:19:00.769645   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:00.769700   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:00.780194   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:19:00.780212   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:19:00.780217   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:19:00.792129   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:19:00.792139   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:19:00.809567   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:00.809578   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:00.832371   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:19:00.832378   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:19:00.846926   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:00.846936   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:00.885595   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:19:00.885607   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:19:00.910653   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:00.910665   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:00.949029   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:19:00.949041   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:19:00.960772   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:19:00.960786   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:19:00.972583   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:19:00.972594   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:19:00.988198   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:19:00.988210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:19:00.999814   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:19:00.999824   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:19:01.015020   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:19:01.015032   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:19:01.025834   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:19:01.025845   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:01.037508   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:19:01.037518   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:19:01.058234   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:19:01.058244   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:19:01.072378   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:01.072388   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:58.202206   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:03.577489   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:03.204298   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:03.204548   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:03.230524   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:03.230641   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:03.247680   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:03.247759   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:03.260974   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:03.261061   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:03.272500   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:03.272565   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:03.282632   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:03.282712   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:03.293575   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:03.293651   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:03.303740   14536 logs.go:276] 0 containers: []
	W0320 17:19:03.303756   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:03.303823   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:03.315172   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:03.315189   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:03.315195   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:03.330018   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:03.330028   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:03.342301   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:03.342314   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:03.355433   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:03.355445   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:03.367384   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:03.367394   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:03.378669   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:03.378681   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:03.401880   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:03.401892   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:03.415683   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:03.415694   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:03.452757   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:03.452767   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:03.466701   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:03.466714   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:03.503783   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:03.503794   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:03.515800   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:03.515812   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:03.527333   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:03.527344   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:03.531542   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:03.531550   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:03.548302   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:03.548312   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:03.566618   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:03.566629   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:03.578170   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:03.578180   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:03.591668   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:03.591678   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:03.631662   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:03.631669   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
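
	Each time the healthz probe times out, the test falls back to a diagnostics pass: it enumerates the control-plane containers by their k8s_ name prefix and tails the last 400 lines from each, plus the kubelet and Docker journals and dmesg. A shell re-creation of that pass is sketched below; the component names, tail length, and host-level commands are taken from the log lines above, while the loop structure itself is an illustration, not minikube's actual code.

	    # Re-creation of the diagnostics pass logged above (illustrative; the
	    # component list and --tail 400 come from the log, the loop is assumed).
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet storage-provisioner; do
	      for id in $(docker ps -a --filter="name=k8s_${c}" --format='{{.ID}}'); do
	        echo "=== ${c} [${id}] ==="
	        docker logs --tail 400 "$id"
	      done
	    done
	    # Host-level sources gathered in the same pass; note the crictl-or-docker
	    # fallback used for "container status".
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
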
	I0320 17:19:06.147405   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:08.579483   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:08.579621   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:08.593782   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:19:08.593867   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:08.605266   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:19:08.605335   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:08.616112   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:19:08.616185   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:08.626555   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:19:08.626626   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:08.636968   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:19:08.637034   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:08.647602   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:19:08.647665   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:08.658079   14525 logs.go:276] 0 containers: []
	W0320 17:19:08.658088   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:08.658141   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:08.669278   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:19:08.669294   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:19:08.669299   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:19:08.686604   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:19:08.686614   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:19:08.699827   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:19:08.699839   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:08.711928   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:19:08.711938   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:19:08.726592   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:19:08.726605   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:19:08.738883   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:19:08.738897   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:19:08.754280   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:19:08.754291   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:19:08.765953   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:08.765965   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:08.806217   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:08.806227   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:08.810465   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:19:08.810474   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:19:08.834583   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:19:08.834592   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:19:08.845848   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:08.845858   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:08.867518   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:08.867525   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:08.902436   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:19:08.902447   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:19:08.916884   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:19:08.916894   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:19:08.930864   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:19:08.930875   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:19:08.943619   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:19:08.943631   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:19:11.457570   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:11.149964   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:11.150399   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:11.187604   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:11.187727   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:11.208786   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:11.208883   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:11.223959   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:11.224041   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:11.236589   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:11.236658   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:11.247825   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:11.247895   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:11.258707   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:11.258773   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:11.268644   14536 logs.go:276] 0 containers: []
	W0320 17:19:11.268657   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:11.268715   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:11.279472   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:11.279488   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:11.279494   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:11.313502   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:11.313515   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:11.327528   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:11.327539   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:11.339153   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:11.339167   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:11.351684   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:11.351695   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:11.356216   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:11.356223   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:11.368065   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:11.368076   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:11.383879   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:11.383893   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:11.395289   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:11.395300   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:11.409945   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:11.409955   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:11.430804   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:11.430815   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:11.442877   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:11.442888   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:11.454543   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:11.454554   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:11.466153   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:11.466163   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:11.488670   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:11.488680   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:11.526054   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:11.526067   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:11.540546   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:11.540556   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:11.554849   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:11.554860   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:11.572512   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:11.572523   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:16.459590   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:16.459672   14525 kubeadm.go:591] duration metric: took 4m3.751348083s to restartPrimaryControlPlane
	W0320 17:19:16.459744   14525 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0320 17:19:16.459775   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0320 17:19:14.114712   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:17.498914   14525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03916725s)
	I0320 17:19:17.498990   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0320 17:19:17.504302   14525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:19:17.507321   14525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:19:17.510128   14525 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0320 17:19:17.510135   14525 kubeadm.go:156] found existing configuration files:
	
	I0320 17:19:17.510161   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf
	I0320 17:19:17.512772   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0320 17:19:17.512796   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:19:17.515627   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf
	I0320 17:19:17.518841   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0320 17:19:17.518862   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:19:17.521977   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf
	I0320 17:19:17.524485   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0320 17:19:17.524510   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:19:17.527516   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf
	I0320 17:19:17.530542   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0320 17:19:17.530567   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
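
	The four grep/rm pairs above are minikube's stale-config cleanup: if a kubeconfig under /etc/kubernetes does not mention the expected control-plane endpoint, it is removed so the following kubeadm init regenerates it. In this run grep exits with status 2 because the files are already gone (kubeadm reset deleted them), which the check treats the same as a mismatch. A compact equivalent, using the endpoint from this run, might look like the sketch below (assumed logic inferred from the log lines, not minikube's source):

	    endpoint="https://control-plane.minikube.internal:52304"
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
	        # grep returns 1 on no match and 2 on a missing file; either way the
	        # config cannot be trusted for this endpoint, so drop it.
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done
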
	I0320 17:19:17.533079   14525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0320 17:19:17.549612   14525 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0320 17:19:17.549691   14525 kubeadm.go:309] [preflight] Running pre-flight checks
	I0320 17:19:17.598444   14525 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0320 17:19:17.598507   14525 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0320 17:19:17.598569   14525 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0320 17:19:17.650951   14525 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0320 17:19:17.655162   14525 out.go:204]   - Generating certificates and keys ...
	I0320 17:19:17.655195   14525 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0320 17:19:17.655226   14525 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0320 17:19:17.655263   14525 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0320 17:19:17.655297   14525 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0320 17:19:17.655353   14525 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0320 17:19:17.655382   14525 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0320 17:19:17.655418   14525 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0320 17:19:17.655456   14525 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0320 17:19:17.655592   14525 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0320 17:19:17.655649   14525 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0320 17:19:17.655682   14525 kubeadm.go:309] [certs] Using the existing "sa" key
	I0320 17:19:17.655714   14525 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0320 17:19:17.746178   14525 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0320 17:19:17.828140   14525 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0320 17:19:17.917561   14525 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0320 17:19:18.109431   14525 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0320 17:19:18.139948   14525 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0320 17:19:18.140289   14525 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0320 17:19:18.140310   14525 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0320 17:19:18.214313   14525 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0320 17:19:18.222462   14525 out.go:204]   - Booting up control plane ...
	I0320 17:19:18.222517   14525 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0320 17:19:18.222553   14525 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0320 17:19:18.222585   14525 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0320 17:19:18.222623   14525 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0320 17:19:18.222717   14525 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0320 17:19:19.116802   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:19.116934   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:19.128318   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:19.128389   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:19.140508   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:19.140576   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:19.152679   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:19.152766   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:19.165848   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:19.165920   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:19.177509   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:19.177593   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:19.190419   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:19.190493   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:19.201436   14536 logs.go:276] 0 containers: []
	W0320 17:19:19.201449   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:19.201514   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:19.214482   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:19.214501   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:19.214506   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:19.227801   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:19.227816   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:19.240045   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:19.240071   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:19.253556   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:19.253570   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:19.272648   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:19.272665   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:19.286386   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:19.286401   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:19.300998   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:19.301014   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:19.346699   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:19.346715   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:19.383874   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:19.383894   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:19.403293   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:19.403307   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:19.415129   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:19.415144   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:19.444855   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:19.444869   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:19.460775   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:19.460791   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:19.500642   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:19.500660   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:19.528670   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:19.528685   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:19.545840   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:19.545854   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:19.571748   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:19.571765   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:19.584894   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:19.584908   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:19.589565   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:19.589574   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:22.106018   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:22.720869   14525 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501795 seconds
	I0320 17:19:22.720983   14525 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0320 17:19:22.727398   14525 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0320 17:19:23.236734   14525 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0320 17:19:23.236837   14525 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-184000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0320 17:19:23.743003   14525 kubeadm.go:309] [bootstrap-token] Using token: 5mp7r1.8ewgf33bb9i5an87
	I0320 17:19:23.749864   14525 out.go:204]   - Configuring RBAC rules ...
	I0320 17:19:23.749931   14525 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0320 17:19:23.750003   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0320 17:19:23.756367   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0320 17:19:23.757336   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0320 17:19:23.758425   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0320 17:19:23.759162   14525 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0320 17:19:23.762575   14525 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0320 17:19:23.929849   14525 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0320 17:19:24.147474   14525 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0320 17:19:24.147823   14525 kubeadm.go:309] 
	I0320 17:19:24.147856   14525 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0320 17:19:24.147861   14525 kubeadm.go:309] 
	I0320 17:19:24.147896   14525 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0320 17:19:24.147899   14525 kubeadm.go:309] 
	I0320 17:19:24.147909   14525 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0320 17:19:24.147937   14525 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0320 17:19:24.147965   14525 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0320 17:19:24.147968   14525 kubeadm.go:309] 
	I0320 17:19:24.147994   14525 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0320 17:19:24.147998   14525 kubeadm.go:309] 
	I0320 17:19:24.148021   14525 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0320 17:19:24.148024   14525 kubeadm.go:309] 
	I0320 17:19:24.148050   14525 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0320 17:19:24.148176   14525 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0320 17:19:24.148215   14525 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0320 17:19:24.148219   14525 kubeadm.go:309] 
	I0320 17:19:24.148288   14525 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0320 17:19:24.148377   14525 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0320 17:19:24.148394   14525 kubeadm.go:309] 
	I0320 17:19:24.148457   14525 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5mp7r1.8ewgf33bb9i5an87 \
	I0320 17:19:24.148524   14525 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 \
	I0320 17:19:24.148537   14525 kubeadm.go:309] 	--control-plane 
	I0320 17:19:24.148540   14525 kubeadm.go:309] 
	I0320 17:19:24.148584   14525 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0320 17:19:24.148587   14525 kubeadm.go:309] 
	I0320 17:19:24.148632   14525 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5mp7r1.8ewgf33bb9i5an87 \
	I0320 17:19:24.148704   14525 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 
	I0320 17:19:24.148823   14525 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0320 17:19:24.148837   14525 cni.go:84] Creating CNI manager for ""
	I0320 17:19:24.148845   14525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:19:24.153772   14525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0320 17:19:24.163795   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0320 17:19:24.167355   14525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
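
	The scp line above only records that a 457-byte conflist was written to /etc/cni/net.d/1-k8s.conflist; the exact contents are not in the log. A representative bridge conflist of the kind minikube generates is sketched here, with all field values assumed for illustration:

	    # Hypothetical bridge CNI config; the real 457-byte file is not shown
	    # in the log, so these values are assumptions.
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
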
	I0320 17:19:24.172805   14525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0320 17:19:24.172868   14525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0320 17:19:24.173119   14525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-184000 minikube.k8s.io/updated_at=2024_03_20T17_19_24_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=9b0fc36eb9ca8470d9076ea1f55dc7d08f560c77 minikube.k8s.io/name=stopped-upgrade-184000 minikube.k8s.io/primary=true
	I0320 17:19:24.176217   14525 ops.go:34] apiserver oom_adj: -16
	I0320 17:19:24.217650   14525 kubeadm.go:1107] duration metric: took 44.831666ms to wait for elevateKubeSystemPrivileges
	W0320 17:19:24.217867   14525 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0320 17:19:24.217872   14525 kubeadm.go:393] duration metric: took 4m11.52542775s to StartCluster
	I0320 17:19:24.217882   14525 settings.go:142] acquiring lock: {Name:mkcbd6581de6bc632d1ec3032ba4410c2461597a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:24.218014   14525 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:19:24.218650   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:24.218987   14525 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:19:24.223803   14525 out.go:177] * Verifying Kubernetes components...
	I0320 17:19:24.219016   14525 config.go:182] Loaded profile config "stopped-upgrade-184000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:19:24.219166   14525 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0320 17:19:24.229741   14525 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-184000"
	I0320 17:19:24.229785   14525 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-184000"
	I0320 17:19:24.229800   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:19:24.229808   14525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-184000"
	I0320 17:19:24.229898   14525 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-184000"
	W0320 17:19:24.229905   14525 addons.go:243] addon storage-provisioner should already be in state true
	I0320 17:19:24.229922   14525 host.go:66] Checking if "stopped-upgrade-184000" exists ...
	I0320 17:19:24.231045   14525 kapi.go:59] client config for stopped-upgrade-184000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105922a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
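
	The rest.Config dump above shows the client credentials minikube uses to reach the cluster. The same call can be reproduced manually with kubectl using the cert, key, and CA paths from that dump; the `get nodes` verb here is just an example:

	    kubectl --server=https://10.0.2.15:8443 \
	      --certificate-authority=/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt \
	      --client-certificate=/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.crt \
	      --client-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key \
	      get nodes
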
	I0320 17:19:24.231410   14525 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-184000"
	W0320 17:19:24.231416   14525 addons.go:243] addon default-storageclass should already be in state true
	I0320 17:19:24.231423   14525 host.go:66] Checking if "stopped-upgrade-184000" exists ...
	I0320 17:19:24.235734   14525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:19:24.239822   14525 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:24.239828   14525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0320 17:19:24.239834   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:19:24.240559   14525 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:24.240565   14525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0320 17:19:24.240569   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:19:24.305328   14525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:19:24.310186   14525 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:19:24.310225   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:19:24.313936   14525 api_server.go:72] duration metric: took 94.939875ms to wait for apiserver process to appear ...
	I0320 17:19:24.313944   14525 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:19:24.313951   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
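
	From here the run enters the same healthz wait the restart path never got past: api_server.go polls /healthz until it answers or the overall start timeout expires. A minimal shell stand-in for that poll (the curl flags are an assumption; minikube implements this in Go):

	    # Not minikube's implementation; -k skips TLS verification for brevity.
	    until curl -sk --max-time 5 https://10.0.2.15:8443/healthz | grep -q '^ok$'; do
	      sleep 2   # retry until the apiserver answers or the caller times out
	    done
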
	I0320 17:19:24.334160   14525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:24.335906   14525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:27.106107   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:27.106338   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:27.130233   14536 logs.go:276] 2 containers: [989e89c1ff06 4e99a40742ea]
	I0320 17:19:27.130332   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:27.149158   14536 logs.go:276] 2 containers: [7ffad2be57b8 9fa8564030f4]
	I0320 17:19:27.149233   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:27.161646   14536 logs.go:276] 2 containers: [d4684acb1ff3 5bcf20efa75f]
	I0320 17:19:27.161727   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:27.173488   14536 logs.go:276] 2 containers: [182bf2305544 b64b6f297233]
	I0320 17:19:27.173561   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:27.185540   14536 logs.go:276] 2 containers: [ca05a42ee2ba 8322a1713029]
	I0320 17:19:27.185603   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:27.196457   14536 logs.go:276] 2 containers: [4eb284ba02b7 ebe1641c4b92]
	I0320 17:19:27.196522   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:27.206681   14536 logs.go:276] 0 containers: []
	W0320 17:19:27.206694   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:27.206748   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:29.314252   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:29.314326   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:27.217373   14536 logs.go:276] 2 containers: [2d07c179e4c6 21c908c7689b]
	I0320 17:19:27.217389   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:27.217394   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:27.251272   14536 logs.go:123] Gathering logs for storage-provisioner [21c908c7689b] ...
	I0320 17:19:27.251287   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 21c908c7689b"
	I0320 17:19:27.262986   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:27.262996   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:27.304831   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:27.304843   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:27.309107   14536 logs.go:123] Gathering logs for kube-apiserver [4e99a40742ea] ...
	I0320 17:19:27.309112   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4e99a40742ea"
	I0320 17:19:27.346136   14536 logs.go:123] Gathering logs for etcd [9fa8564030f4] ...
	I0320 17:19:27.346148   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9fa8564030f4"
	I0320 17:19:27.360593   14536 logs.go:123] Gathering logs for kube-controller-manager [4eb284ba02b7] ...
	I0320 17:19:27.360607   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 4eb284ba02b7"
	I0320 17:19:27.382463   14536 logs.go:123] Gathering logs for kube-controller-manager [ebe1641c4b92] ...
	I0320 17:19:27.382478   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ebe1641c4b92"
	I0320 17:19:27.393909   14536 logs.go:123] Gathering logs for storage-provisioner [2d07c179e4c6] ...
	I0320 17:19:27.393922   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d07c179e4c6"
	I0320 17:19:27.405842   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:19:27.405852   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:27.420025   14536 logs.go:123] Gathering logs for kube-apiserver [989e89c1ff06] ...
	I0320 17:19:27.420037   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 989e89c1ff06"
	I0320 17:19:27.434801   14536 logs.go:123] Gathering logs for etcd [7ffad2be57b8] ...
	I0320 17:19:27.434812   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7ffad2be57b8"
	I0320 17:19:27.448401   14536 logs.go:123] Gathering logs for coredns [d4684acb1ff3] ...
	I0320 17:19:27.448412   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d4684acb1ff3"
	I0320 17:19:27.459581   14536 logs.go:123] Gathering logs for coredns [5bcf20efa75f] ...
	I0320 17:19:27.459592   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5bcf20efa75f"
	I0320 17:19:27.470832   14536 logs.go:123] Gathering logs for kube-scheduler [182bf2305544] ...
	I0320 17:19:27.470844   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 182bf2305544"
	I0320 17:19:27.482286   14536 logs.go:123] Gathering logs for kube-scheduler [b64b6f297233] ...
	I0320 17:19:27.482297   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b64b6f297233"
	I0320 17:19:27.498823   14536 logs.go:123] Gathering logs for kube-proxy [ca05a42ee2ba] ...
	I0320 17:19:27.498834   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ca05a42ee2ba"
	I0320 17:19:27.515348   14536 logs.go:123] Gathering logs for kube-proxy [8322a1713029] ...
	I0320 17:19:27.515358   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 8322a1713029"
	I0320 17:19:27.526893   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:27.526907   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:30.049348   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:35.049906   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:35.049942   14536 kubeadm.go:591] duration metric: took 4m6.549813916s to restartPrimaryControlPlane
	W0320 17:19:35.049974   14536 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0320 17:19:35.049988   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0320 17:19:36.135240   14536 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.085279167s)
	I0320 17:19:36.135309   14536 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0320 17:19:36.140421   14536 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:19:36.143438   14536 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:19:36.146443   14536 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0320 17:19:36.146451   14536 kubeadm.go:156] found existing configuration files:
	
	I0320 17:19:36.146482   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf
	I0320 17:19:36.149781   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0320 17:19:36.149818   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:19:36.152345   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf
	I0320 17:19:36.154782   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0320 17:19:36.154805   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:19:36.157865   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf
	I0320 17:19:36.160914   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0320 17:19:36.160958   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:19:36.163736   14536 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf
	I0320 17:19:36.166375   14536 kubeadm.go:162] "https://control-plane.minikube.internal:52377" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52377 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0320 17:19:36.166398   14536 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0320 17:19:36.170227   14536 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0320 17:19:36.186053   14536 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0320 17:19:36.186084   14536 kubeadm.go:309] [preflight] Running pre-flight checks
	I0320 17:19:36.232485   14536 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0320 17:19:36.232542   14536 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0320 17:19:36.232588   14536 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0320 17:19:36.284513   14536 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0320 17:19:34.315677   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:34.315718   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:36.287559   14536 out.go:204]   - Generating certificates and keys ...
	I0320 17:19:36.287595   14536 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0320 17:19:36.287631   14536 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0320 17:19:36.287701   14536 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0320 17:19:36.287734   14536 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0320 17:19:36.287770   14536 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0320 17:19:36.287800   14536 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0320 17:19:36.287837   14536 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0320 17:19:36.287875   14536 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0320 17:19:36.287919   14536 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0320 17:19:36.287961   14536 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0320 17:19:36.287982   14536 kubeadm.go:309] [certs] Using the existing "sa" key
	I0320 17:19:36.288009   14536 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0320 17:19:36.369937   14536 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0320 17:19:36.505255   14536 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0320 17:19:36.616076   14536 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0320 17:19:36.683609   14536 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0320 17:19:36.713184   14536 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0320 17:19:36.713423   14536 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0320 17:19:36.713476   14536 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0320 17:19:36.795412   14536 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0320 17:19:36.801089   14536 out.go:204]   - Booting up control plane ...
	I0320 17:19:36.801145   14536 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0320 17:19:36.801186   14536 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0320 17:19:36.801229   14536 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0320 17:19:36.801270   14536 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0320 17:19:36.801432   14536 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0320 17:19:39.315765   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:39.315837   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
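
Two minikube processes (pids 14525 and 14536) interleave from here on, each re-checking the apiserver's /healthz endpoint on a roughly five-second cadence until it answers or the overall deadline expires. A minimal sketch of such a poll loop, assuming certificate verification is skipped for brevity (api_server.go itself authenticates with the cluster's client certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz GETs the apiserver /healthz endpoint until it returns "ok"
    // or the deadline passes. InsecureSkipVerify is an assumption made for
    // brevity; the real check uses the cluster's client certificates.
    func pollHealthz(url string, deadline time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	stop := time.Now().Add(deadline)
    	for time.Now().Before(stop) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
    				return nil
    			}
    		}
    		time.Sleep(5 * time.Second) // matches the ~5s cadence in the log
    	}
    	return fmt.Errorf("apiserver %s never became healthy", url)
    }

    func main() {
    	if err := pollHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
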
	I0320 17:19:41.302530   14536 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501718 seconds
	I0320 17:19:41.302630   14536 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0320 17:19:41.306320   14536 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0320 17:19:41.825812   14536 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0320 17:19:41.826209   14536 kubeadm.go:309] [mark-control-plane] Marking the node running-upgrade-120000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0320 17:19:42.329338   14536 kubeadm.go:309] [bootstrap-token] Using token: 8mr0af.bvjv90spdzjaujy3
	I0320 17:19:42.335328   14536 out.go:204]   - Configuring RBAC rules ...
	I0320 17:19:42.335381   14536 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0320 17:19:42.337586   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0320 17:19:42.343377   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0320 17:19:42.344456   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller to automatically approve CSRs from a Node Bootstrap Token
	I0320 17:19:42.345201   14536 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0320 17:19:42.346104   14536 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0320 17:19:42.349006   14536 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0320 17:19:42.521436   14536 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0320 17:19:42.740060   14536 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0320 17:19:42.740498   14536 kubeadm.go:309] 
	I0320 17:19:42.740529   14536 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0320 17:19:42.740532   14536 kubeadm.go:309] 
	I0320 17:19:42.740571   14536 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0320 17:19:42.740574   14536 kubeadm.go:309] 
	I0320 17:19:42.740585   14536 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0320 17:19:42.740615   14536 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0320 17:19:42.740649   14536 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0320 17:19:42.740655   14536 kubeadm.go:309] 
	I0320 17:19:42.740687   14536 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0320 17:19:42.740691   14536 kubeadm.go:309] 
	I0320 17:19:42.740712   14536 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0320 17:19:42.740714   14536 kubeadm.go:309] 
	I0320 17:19:42.740738   14536 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0320 17:19:42.740788   14536 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0320 17:19:42.740831   14536 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0320 17:19:42.740838   14536 kubeadm.go:309] 
	I0320 17:19:42.740883   14536 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0320 17:19:42.740936   14536 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0320 17:19:42.740944   14536 kubeadm.go:309] 
	I0320 17:19:42.740990   14536 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 8mr0af.bvjv90spdzjaujy3 \
	I0320 17:19:42.741055   14536 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 \
	I0320 17:19:42.741068   14536 kubeadm.go:309] 	--control-plane 
	I0320 17:19:42.741072   14536 kubeadm.go:309] 
	I0320 17:19:42.741119   14536 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0320 17:19:42.741123   14536 kubeadm.go:309] 
	I0320 17:19:42.741176   14536 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 8mr0af.bvjv90spdzjaujy3 \
	I0320 17:19:42.741233   14536 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 
	I0320 17:19:42.741311   14536 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
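
The --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 digest of the cluster CA certificate's DER-encoded public key (SubjectPublicKeyInfo). A sketch of how a joining node could recompute it for verification; the ca.crt path follows the usual kubeadm convention:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    // Recomputes the kubeadm discovery-token-ca-cert-hash: sha256 over the
    // DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
    func main() {
    	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // conventional kubeadm path
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }
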
	I0320 17:19:42.741319   14536 cni.go:84] Creating CNI manager for ""
	I0320 17:19:42.741326   14536 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:19:42.744750   14536 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0320 17:19:42.747594   14536 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0320 17:19:42.750671   14536 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
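
The 457-byte 1-k8s.conflist copied above is minikube's bridge CNI configuration. Its exact contents are not reproduced in the log; a representative bridge conflist written the same way (the subnet and plugin values are illustrative assumptions, not the file minikube generated):

    package main

    import "os"

    // A representative bridge CNI conflist. Subnet and plugin settings are
    // illustrative assumptions, not the exact file minikube generated.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }`

    func main() {
    	// Equivalent of the "scp memory --> /etc/cni/net.d/1-k8s.conflist" step.
    	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
    		panic(err)
    	}
    }
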
	I0320 17:19:42.755355   14536 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0320 17:19:42.755402   14536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0320 17:19:42.755474   14536 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes running-upgrade-120000 minikube.k8s.io/updated_at=2024_03_20T17_19_42_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=9b0fc36eb9ca8470d9076ea1f55dc7d08f560c77 minikube.k8s.io/name=running-upgrade-120000 minikube.k8s.io/primary=true
	I0320 17:19:42.789289   14536 ops.go:34] apiserver oom_adj: -16
	I0320 17:19:42.789371   14536 kubeadm.go:1107] duration metric: took 34.010583ms to wait for elevateKubeSystemPrivileges
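
The oom_adj check above confirms the apiserver is biased away from the kernel OOM killer (-16 here). A sketch of the same check in Go, mirroring `cat /proc/$(pgrep kube-apiserver)/oom_adj` from the log (note that oom_adj is the legacy interface; modern kernels also expose oom_score_adj):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // Reads the OOM adjustment of the newest kube-apiserver process.
    func main() {
    	pid, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("apiserver oom_adj: %s", adj)
    }
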
	W0320 17:19:42.800780   14536 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0320 17:19:42.800792   14536 kubeadm.go:393] duration metric: took 4m14.315042s to StartCluster
	I0320 17:19:42.800802   14536 settings.go:142] acquiring lock: {Name:mkcbd6581de6bc632d1ec3032ba4410c2461597a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:42.800878   14536 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:19:42.801308   14536 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:42.801511   14536 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:19:42.805616   14536 out.go:177] * Verifying Kubernetes components...
	I0320 17:19:42.801567   14536 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0320 17:19:42.801700   14536 config.go:182] Loaded profile config "running-upgrade-120000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:19:42.812636   14536 addons.go:69] Setting default-storageclass=true in profile "running-upgrade-120000"
	I0320 17:19:42.812656   14536 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "running-upgrade-120000"
	I0320 17:19:42.812694   14536 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:19:42.812673   14536 addons.go:69] Setting storage-provisioner=true in profile "running-upgrade-120000"
	I0320 17:19:42.812740   14536 addons.go:234] Setting addon storage-provisioner=true in "running-upgrade-120000"
	W0320 17:19:42.812747   14536 addons.go:243] addon storage-provisioner should already be in state true
	I0320 17:19:42.812759   14536 host.go:66] Checking if "running-upgrade-120000" exists ...
	I0320 17:19:42.814178   14536 kapi.go:59] client config for running-upgrade-120000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/running-upgrade-120000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x10427ea80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0320 17:19:42.814293   14536 addons.go:234] Setting addon default-storageclass=true in "running-upgrade-120000"
	W0320 17:19:42.814299   14536 addons.go:243] addon default-storageclass should already be in state true
	I0320 17:19:42.814307   14536 host.go:66] Checking if "running-upgrade-120000" exists ...
	I0320 17:19:42.818573   14536 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:19:44.315940   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:44.315971   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:42.821483   14536 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:42.821490   14536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0320 17:19:42.821496   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:19:42.822257   14536 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:42.822262   14536 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0320 17:19:42.822266   14536 sshutil.go:53] new ssh client: &{IP:localhost Port:52309 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/running-upgrade-120000/id_rsa Username:docker}
	I0320 17:19:42.899831   14536 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:19:42.905040   14536 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:19:42.905086   14536 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:19:42.908680   14536 api_server.go:72] duration metric: took 107.164583ms to wait for apiserver process to appear ...
	I0320 17:19:42.908688   14536 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:19:42.908693   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:42.953011   14536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:42.955533   14536 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:49.316232   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:49.316268   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:47.910619   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:47.910665   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:54.316637   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:54.316676   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0320 17:19:54.704166   14525 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0320 17:19:54.709363   14525 out.go:177] * Enabled addons: storage-provisioner
	I0320 17:19:54.717332   14525 addons.go:505] duration metric: took 30.499510542s for enable addons: enabled=[storage-provisioner]
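
The default-storageclass addon fails here because its StorageClass list call never reaches the apiserver (the i/o timeout above). What that callback does, in essence, is fetch a StorageClass and annotate it as the cluster default. A minimal client-go sketch of the same operation, assuming a reachable cluster and a class named "standard" (both the kubeconfig path and class name are assumptions for the sketch):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Marks the StorageClass "standard" as the cluster default -- the
    // operation that times out in the log above.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.Background()
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err) // in the failing run this surfaces as the i/o timeout
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("standard is now the default StorageClass")
    }
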
	I0320 17:19:52.911098   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:52.911117   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:59.317206   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:59.317244   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:57.911322   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:57.911378   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:04.317988   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:04.318017   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:02.911790   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:02.911855   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:09.318953   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:09.318982   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:07.912381   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:07.912405   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:12.913086   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:12.913123   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0320 17:20:13.292331   14536 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0320 17:20:13.296538   14536 out.go:177] * Enabled addons: storage-provisioner
	I0320 17:20:14.320207   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:14.320251   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:13.304435   14536 addons.go:505] duration metric: took 30.504060416s for enable addons: enabled=[storage-provisioner]
	I0320 17:20:19.321846   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:19.321877   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:17.913667   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:17.913775   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:24.323867   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:24.323983   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:24.343216   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:24.343309   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:24.355618   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:24.355694   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:24.365598   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:24.365667   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:24.376212   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:24.376280   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:24.386636   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:24.386707   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:24.397093   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:24.397156   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:24.407443   14525 logs.go:276] 0 containers: []
	W0320 17:20:24.407454   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:24.407511   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:24.419063   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:24.419079   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:24.419084   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:24.430486   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:24.430498   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:24.453847   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:24.453856   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:24.488378   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:24.488386   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:24.492298   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:24.492306   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:24.506920   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:24.506928   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:24.518039   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:24.518047   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:24.529447   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:24.529457   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:24.551026   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:24.551041   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:24.562253   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:24.562262   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:24.601234   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:24.601245   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:24.619630   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:24.619645   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:24.631393   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:24.631406   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
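
Each failed healthz cycle from here on triggers the same diagnostics pass: resolve every control-plane container via a docker ps name filter, then tail the last 400 lines of its logs. A condensed sketch of that loop (the k8s_<component> name prefixes mirror the log; error handling is trimmed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // One diagnostics pass as in logs.go: find each k8s_<component>
    // container, then tail the last 400 lines of its logs.
    func gather(component string) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
    		fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
    	}
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager", "storage-provisioner"} {
    		gather(c)
    	}
    }
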
	I0320 17:20:22.915042   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:22.915089   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:27.148990   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:27.916663   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:27.916706   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:32.151118   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:32.151299   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:32.164799   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:32.164883   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:32.176290   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:32.176368   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:32.186682   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:32.186751   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:32.197087   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:32.197153   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:32.207742   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:32.207810   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:32.218693   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:32.218754   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:32.228560   14525 logs.go:276] 0 containers: []
	W0320 17:20:32.228572   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:32.228630   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:32.239246   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:32.239261   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:32.239266   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:32.250910   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:32.250920   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:32.261702   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:32.261712   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:32.275963   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:32.275973   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:32.295718   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:32.295729   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:32.307132   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:32.307143   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:32.341533   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:32.341549   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:32.346436   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:32.346443   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:32.359982   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:32.359992   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:32.371416   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:32.371427   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:32.394532   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:32.394539   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:32.429198   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:32.429210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:32.443812   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:32.443832   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:34.957242   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:32.918682   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:32.918720   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:39.959438   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:39.959636   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:39.982320   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:39.982394   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:39.995000   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:39.995064   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:40.007881   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:40.007946   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:40.026672   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:40.026737   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:40.037142   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:40.037206   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:40.048122   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:40.048188   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:40.058602   14525 logs.go:276] 0 containers: []
	W0320 17:20:40.058612   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:40.058668   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:40.069143   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:40.069159   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:40.069165   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:40.105040   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:40.105049   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:40.141918   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:40.141933   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:40.156325   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:40.156335   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:40.167999   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:40.168009   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:40.185597   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:40.185608   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:40.206652   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:40.206660   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:40.218215   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:40.218225   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:40.242268   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:40.242275   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:40.246311   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:40.246317   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:40.263078   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:40.263091   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:40.274900   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:40.274915   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:40.286797   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:40.286807   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:37.918998   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:37.919036   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:42.799913   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:42.920156   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:42.920349   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:42.933008   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:20:42.933079   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:42.945047   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:20:42.945132   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:42.958654   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:20:42.958728   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:42.988483   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:20:42.988557   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:43.005442   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:20:43.005513   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:43.016753   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:20:43.016828   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:43.027659   14536 logs.go:276] 0 containers: []
	W0320 17:20:43.027674   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:43.027737   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:43.039355   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:20:43.039370   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:20:43.039375   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:20:43.055848   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:20:43.055860   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:20:43.070904   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:20:43.070914   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:20:43.083251   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:20:43.083263   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:20:43.095214   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:43.095225   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:43.118521   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:20:43.118531   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:43.130320   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:43.130331   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:43.164616   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:43.164625   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:43.169127   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:20:43.169137   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:20:43.181683   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:20:43.181693   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:20:43.197519   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:20:43.197528   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:20:43.217336   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:43.217348   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:43.290656   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:20:43.290668   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:20:45.804798   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:47.801989   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:47.802139   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:47.817360   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:47.817443   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:47.829960   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:47.830028   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:47.840853   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:47.840916   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:47.851196   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:47.851254   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:47.862085   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:47.862144   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:47.872813   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:47.872886   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:47.883089   14525 logs.go:276] 0 containers: []
	W0320 17:20:47.883101   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:47.883163   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:47.894740   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:47.894756   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:47.894761   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:47.899424   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:47.899450   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:47.914132   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:47.914142   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:47.927418   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:47.927428   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:47.945487   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:47.945517   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:47.960864   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:47.960875   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:47.985171   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:47.985182   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:47.996926   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:47.996936   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:48.032394   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:48.032404   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:48.077594   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:48.077604   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:48.091781   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:48.091795   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:48.105218   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:48.105231   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:48.117345   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:48.117356   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:50.642863   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:50.807042   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:50.807226   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:50.820238   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:20:50.820316   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:50.831335   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:20:50.831409   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:50.842202   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:20:50.842265   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:50.852781   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:20:50.852850   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:50.863808   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:20:50.863880   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:50.874990   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:20:50.875055   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:50.885600   14536 logs.go:276] 0 containers: []
	W0320 17:20:50.885615   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:50.885673   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:50.896786   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:20:50.896805   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:50.896811   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:50.930096   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:20:50.930106   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:20:50.942613   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:20:50.942626   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:20:50.961093   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:20:50.961103   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:20:50.974164   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:50.974178   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:50.998545   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:20:50.998558   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:51.010336   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:51.010348   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:51.015065   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:51.015073   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:51.053218   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:20:51.053229   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:20:51.068143   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:20:51.068154   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:20:51.082734   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:20:51.082745   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:20:51.094763   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:20:51.094780   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:20:51.112609   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:20:51.112619   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:20:55.644083   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:55.644231   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:55.655176   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:55.655258   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:55.665792   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:55.665865   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:55.680930   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:55.680995   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:55.691315   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:55.691379   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:55.701675   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:55.701745   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:55.712237   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:55.712302   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:55.723276   14525 logs.go:276] 0 containers: []
	W0320 17:20:55.723286   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:55.723340   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:55.733318   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:55.733332   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:55.733337   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:55.745620   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:55.745644   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:55.768630   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:55.768638   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:55.780322   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:55.780331   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:55.794682   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:55.794693   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:55.808343   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:55.808352   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:55.823194   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:55.823203   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:55.834711   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:55.834722   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:55.852080   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:55.852092   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:55.886335   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:55.886343   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:55.890379   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:55.890386   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:55.933052   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:55.933063   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:55.950630   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:55.950640   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:53.626646   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:58.469240   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:58.628928   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:58.629102   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:58.651435   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:20:58.651533   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:58.665688   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:20:58.665759   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:58.678059   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:20:58.678118   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:58.689432   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:20:58.689498   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:58.700922   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:20:58.701015   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:58.712133   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:20:58.712199   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:58.722958   14536 logs.go:276] 0 containers: []
	W0320 17:20:58.722970   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:58.723030   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:58.733550   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:20:58.733568   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:58.733573   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:58.738320   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:20:58.738328   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:20:58.752821   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:20:58.752835   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:20:58.764761   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:20:58.764771   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:20:58.776873   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:20:58.776883   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:20:58.788535   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:58.788548   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:58.811569   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:58.811577   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:58.844660   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:58.844668   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:58.880802   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:20:58.880817   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:20:58.895480   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:20:58.895493   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:20:58.907817   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:20:58.907828   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:20:58.923263   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:20:58.923274   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:20:58.946971   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:20:58.946982   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:01.460730   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:03.471487   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
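The failure recorded here — "stopped: ... context deadline exceeded (Client.Timeout exceeded while awaiting headers)" — is the standard Go net/http client-timeout error, so the probe behind these lines is an HTTPS GET against /healthz with a per-request deadline. A minimal sketch that reproduces the same probe and the same error string; the 5-second timeout and the skipped TLS verification are assumptions, since the log does not show the probe's configuration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		// Assumed value; exceeding it yields exactly
		// "context deadline exceeded (Client.Timeout exceeded while awaiting headers)".
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The guest apiserver presents a self-signed certificate,
			// so a bare probe like this one would skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.0.2.15:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}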
	I0320 17:21:03.471812   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:03.506821   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:03.506945   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:03.527473   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:03.527569   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:03.543253   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:21:03.543321   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:03.555099   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:03.555164   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:03.565519   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:03.565586   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:03.576209   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:03.576275   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:03.587730   14525 logs.go:276] 0 containers: []
	W0320 17:21:03.587741   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:03.587801   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:03.603426   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
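Every gathering cycle opens the way the eight `docker ps -a --filter=name=k8s_... --format={{.ID}}` runs above do: each control-plane component name is resolved to zero or more container IDs, and each ID is then tailed with `docker logs --tail 400`. A hedged sketch of that discovery-plus-tail loop — the component list is copied from the filters in this log, while the loop itself and the elided error handling are assumptions:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// Matches the log's `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`.
		out, _ := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+name, "--format", "{{.ID}}").Output()
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
		for _, id := range ids {
			// Matches the log's `docker logs --tail 400 <id>`.
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== docker logs --tail 400 %s ==\n%s", id, logs)
		}
	}
}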
	I0320 17:21:03.603443   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:03.603447   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:03.615337   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:03.615351   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:03.627070   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:03.627080   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:03.650653   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:03.650662   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:03.654844   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:03.654851   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:03.669209   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:03.669220   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:03.683359   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:03.683369   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:03.695152   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:03.695163   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:03.710248   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:03.710258   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:03.722023   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:03.722039   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:03.743511   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:03.743522   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:03.754989   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:03.754999   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:03.791426   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:03.791479   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
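Each cycle closes with host-level collection: the kubelet and docker/cri-docker journals, a filtered dmesg, and `kubectl describe nodes` run through the guest's pinned v1.24.1 kubectl and kubeconfig. A rough sketch of those steps under stated assumptions — the commands are copied from the log, but the driver loop is invented here, and the dmesg step is omitted because its `| tail -n 400` pipeline needs a shell:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
		{"/var/lib/minikube/binaries/v1.24.1/kubectl", "describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig"},
	}
	for _, args := range cmds {
		// Each command runs under sudo on the guest, as in the log.
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Printf("== sudo %v (err: %v)\n%s\n", args, err, out)
	}
}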
	I0320 17:21:06.328697   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:06.462035   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:06.462287   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:06.481028   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:06.481110   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:06.494974   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:06.495043   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:06.506249   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:06.506317   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:06.516427   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:06.516489   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:06.526318   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:06.526412   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:06.537364   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:06.537434   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:06.547440   14536 logs.go:276] 0 containers: []
	W0320 17:21:06.547451   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:06.547510   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:06.558982   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:06.558997   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:06.559002   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:06.592423   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:06.592436   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:06.606810   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:06.606822   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:06.621259   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:06.621268   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:06.633251   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:06.633261   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:06.647335   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:06.647350   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:06.658701   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:06.658713   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:06.663110   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:06.663116   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:06.700158   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:06.700172   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:06.718586   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:06.718597   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:06.729806   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:06.729817   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:06.741356   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:06.741368   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:06.758956   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:06.758970   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:11.330780   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:11.330958   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:11.346563   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:11.346636   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:11.359044   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:11.359114   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:11.369868   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:21:11.369932   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:11.380534   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:11.380601   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:11.391040   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:11.391103   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:11.401847   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:11.401905   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:11.412537   14525 logs.go:276] 0 containers: []
	W0320 17:21:11.412547   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:11.412595   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:11.430049   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:11.430063   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:11.430068   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:11.441322   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:11.441332   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:11.453015   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:11.453026   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:11.469624   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:11.469635   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:11.481182   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:11.481193   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:11.498735   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:11.498745   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:11.510360   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:11.510370   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:11.545704   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:11.545715   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:11.549763   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:11.549771   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:11.585326   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:11.585335   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:11.619185   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:11.619196   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:11.633685   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:11.633694   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:11.658697   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:11.658707   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:09.285875   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:14.173070   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:14.287909   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:14.288062   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:14.302729   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:14.302808   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:14.314408   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:14.314474   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:14.324983   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:14.325055   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:14.334998   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:14.335059   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:14.345333   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:14.345402   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:14.366509   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:14.366580   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:14.376511   14536 logs.go:276] 0 containers: []
	W0320 17:21:14.376524   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:14.376576   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:14.387101   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:14.387116   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:14.387122   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:14.400883   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:14.400897   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:14.412968   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:14.412980   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:14.427720   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:14.427730   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:14.439752   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:14.439765   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:14.464533   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:14.464547   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:14.483138   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:14.483149   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:14.494278   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:14.494290   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:14.528840   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:14.528852   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:14.533720   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:14.533727   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:14.575693   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:14.575704   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:14.591924   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:14.591937   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:14.604028   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:14.604042   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:17.117244   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:19.174043   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:19.174156   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:19.189359   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:19.189438   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:19.201403   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:19.201464   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:19.212292   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:21:19.212358   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:19.222711   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:19.222775   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:19.233169   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:19.233230   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:19.243358   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:19.243419   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:19.253407   14525 logs.go:276] 0 containers: []
	W0320 17:21:19.253417   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:19.253471   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:19.263769   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:19.263784   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:19.263790   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:19.285510   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:19.285519   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:19.300419   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:19.300429   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:19.312441   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:19.312452   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:19.324015   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:19.324025   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:19.348183   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:19.348194   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:19.359439   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:19.359450   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:19.373789   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:19.373809   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:19.387497   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:19.387508   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:19.422130   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:19.422143   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:19.433502   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:19.433511   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:19.451176   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:19.451186   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:19.487073   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:19.487081   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:22.119390   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:22.119492   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:22.131423   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:22.131498   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:22.141944   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:22.142012   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:22.152247   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:22.152312   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:22.162367   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:22.162435   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:22.172714   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:22.172774   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:22.183540   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:22.183601   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:22.194233   14536 logs.go:276] 0 containers: []
	W0320 17:21:22.194243   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:22.194294   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:22.205350   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:22.205367   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:22.205372   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:21.993278   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:22.239755   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:22.239764   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:22.244882   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:22.244889   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:22.260981   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:22.260991   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:22.272130   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:22.272158   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:22.286887   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:22.286898   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:22.298454   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:22.298463   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:22.321121   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:22.321134   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:22.358230   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:22.358244   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:22.372317   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:22.372327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:22.384497   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:22.384509   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:22.396275   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:22.396286   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:22.413799   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:22.413809   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:24.926838   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:26.993476   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:26.993601   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:27.008444   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:27.008518   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:27.020760   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:27.020832   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:27.031567   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:27.031646   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:27.042097   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:27.042159   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:27.052198   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:27.052272   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:27.062807   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:27.062879   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:27.073101   14525 logs.go:276] 0 containers: []
	W0320 17:21:27.073112   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:27.073170   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:27.083517   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:27.083533   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:27.083539   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:27.088098   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:27.088107   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:27.099433   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:27.099448   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:27.111434   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:27.111445   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:27.125416   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:27.125429   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:27.147376   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:27.147385   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:27.182279   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:27.182288   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:27.217507   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:27.217526   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:27.231736   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:27.231746   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:27.243428   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:27.243438   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:27.259716   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:27.259725   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:27.272069   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:27.272079   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:27.287090   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:27.287101   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:27.299462   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:27.299473   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:27.324465   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:27.324474   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:29.837848   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:29.928176   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:29.928320   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:29.949284   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:29.949375   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:29.965151   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:29.965228   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:29.977734   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:29.977804   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:29.988518   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:29.988589   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:30.003764   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:30.003835   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:30.015222   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:30.015289   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:30.025490   14536 logs.go:276] 0 containers: []
	W0320 17:21:30.025502   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:30.025559   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:30.036271   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:30.036290   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:30.036295   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:30.051079   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:30.051088   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:30.062743   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:30.062754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:30.073679   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:30.073690   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:30.097945   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:30.097953   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:30.109052   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:30.109062   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:30.142714   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:30.142730   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:30.147180   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:30.147188   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:30.182013   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:30.182027   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:30.199977   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:30.199989   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:30.211644   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:30.211656   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:30.228768   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:30.228780   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:30.240908   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:30.240921   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:34.839808   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:34.839963   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:34.860973   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:34.861050   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:34.871722   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:34.871792   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:34.882295   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:34.882365   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:34.892824   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:34.892894   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:34.902859   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:34.902927   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:34.913670   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:34.913735   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:34.923673   14525 logs.go:276] 0 containers: []
	W0320 17:21:34.923683   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:34.923739   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:34.934091   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:34.934111   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:34.934116   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:34.953479   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:34.953492   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:34.965085   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:34.965098   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:34.990456   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:34.990467   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:35.004966   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:35.004978   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:35.016031   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:35.016043   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:35.027789   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:35.027801   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:35.039366   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:35.039377   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:35.043809   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:35.043818   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:35.077937   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:35.077948   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:35.089306   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:35.089316   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:35.103198   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:35.103210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:35.121031   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:35.121043   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:35.132092   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:35.132102   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:35.167052   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:35.167062   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:32.757598   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:37.683778   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:37.759599   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:37.759758   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:37.776353   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:37.776432   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:37.788981   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:37.789044   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:37.799914   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:37.799979   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:37.810635   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:37.810701   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:37.821721   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:37.821784   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:37.835656   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:37.835720   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:37.845867   14536 logs.go:276] 0 containers: []
	W0320 17:21:37.845878   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:37.845932   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:37.855788   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:37.855803   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:37.855808   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:37.868434   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:37.868445   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:37.880594   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:37.880606   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:37.898518   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:37.898528   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:37.913620   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:37.913631   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:37.927033   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:37.927046   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:37.964912   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:37.964925   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:37.977020   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:37.977031   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:37.992564   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:37.992577   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:38.004206   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:38.004220   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:38.027474   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:38.027484   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:38.038954   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:38.038969   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:38.072775   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:38.072786   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:40.579207   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:42.685981   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:42.686134   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:42.704777   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:42.704872   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:42.724416   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:42.724497   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:42.736093   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:42.736163   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:42.746935   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:42.746999   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:42.763864   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:42.763928   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:42.774776   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:42.774836   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:42.785031   14525 logs.go:276] 0 containers: []
	W0320 17:21:42.785043   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:42.785103   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:42.796232   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:42.796252   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:42.796258   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:42.807644   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:42.807656   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:42.833788   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:42.833797   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:42.847930   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:42.847945   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:42.860407   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:42.860415   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:42.873356   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:42.873366   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:42.885048   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:42.885058   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:42.889719   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:42.889725   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:42.901159   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:42.901169   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:42.912816   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:42.912827   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:42.931886   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:42.931895   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:42.966583   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:42.966599   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:43.006377   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:43.006392   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:43.024298   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:43.024307   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:43.038893   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:43.038906   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:45.552800   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:45.581366   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:45.581616   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:45.613367   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:45.613479   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:45.633987   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:45.634078   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:45.647335   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:45.647413   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:45.658855   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:45.658921   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:45.673533   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:45.673602   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:45.689737   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:45.689811   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:45.699754   14536 logs.go:276] 0 containers: []
	W0320 17:21:45.699768   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:45.699817   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:45.710618   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:45.710633   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:45.710638   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:45.722017   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:45.722027   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:45.736398   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:45.736410   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:45.748439   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:45.748449   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:45.772389   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:45.772400   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:45.797158   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:45.797166   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:45.832109   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:45.832121   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:45.852040   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:45.852052   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:45.863965   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:45.863976   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:45.875244   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:45.875258   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:45.887445   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:45.887456   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:45.921724   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:45.921731   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:45.926264   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:45.926269   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:50.555036   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:50.555239   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:50.568429   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:50.568503   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:50.579047   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:50.579120   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:50.589802   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:50.589867   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:50.603899   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:50.603958   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:50.614475   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:50.614529   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:50.625269   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:50.625326   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:50.635176   14525 logs.go:276] 0 containers: []
	W0320 17:21:50.635189   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:50.635252   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:50.646106   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:50.646129   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:50.646134   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:50.664312   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:50.664328   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:50.678762   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:50.678772   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:50.694985   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:50.694994   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:50.730849   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:50.730859   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:50.765700   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:50.765711   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:50.778254   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:50.778264   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:50.790505   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:50.790516   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:50.806604   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:50.806614   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:50.832829   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:50.832838   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:50.856946   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:50.856955   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:50.861233   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:50.861244   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:50.875175   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:50.875183   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:50.889524   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:50.889536   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:50.900965   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:50.900981   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
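The "container status" step above runs a self-healing shell one-liner: the backticks substitute crictl's path when `which` finds it (falling back to the bare word "crictl" so the command still parses), and if that invocation fails, the `|| sudo docker ps -a` branch lists containers via Docker instead. A hedged Go sketch that runs the same string locally (in the real flow it is executed over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim command string from the log. If `which crictl` succeeds, its
	// path is substituted and `sudo <path> ps -a` runs; if not, the echoed
	// bare word makes the first command fail cleanly and the Docker fallback
	// takes over.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}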
	I0320 17:21:48.441840   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:53.415340   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:53.443931   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:53.444056   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:53.462316   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:21:53.462409   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:53.476386   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:21:53.476458   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:53.488586   14536 logs.go:276] 2 containers: [a7ad224e3b14 b9570a8a8ad9]
	I0320 17:21:53.488661   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:53.499268   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:21:53.499338   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:53.516255   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:21:53.516322   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:53.526553   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:21:53.526612   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:53.536881   14536 logs.go:276] 0 containers: []
	W0320 17:21:53.536891   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:53.536944   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:53.547384   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:21:53.547398   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:21:53.547403   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:21:53.564601   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:21:53.564610   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:21:53.576351   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:21:53.576362   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:21:53.589991   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:21:53.590002   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:21:53.604718   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:53.604729   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:53.642166   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:21:53.642177   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:21:53.659917   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:21:53.659927   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:21:53.671405   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:21:53.671416   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:21:53.683414   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:21:53.683424   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:21:53.695304   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:53.695314   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:53.719785   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:53.719793   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:53.753914   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:53.753925   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:53.758359   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:21:53.758365   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
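Each sweep begins by enumerating containers per component with docker ps -a --filter=name=k8s_<component> --format={{.ID}} (dockershim names pod containers k8s_<container>_<pod>_..., so a prefix filter finds them), then tails the last 400 lines of every match. An empty result, as for "kindnet" here, is only a warning because that CNI is simply not deployed on this cluster. A sketch of the enumerate-then-tail loop under those assumptions (not the actual logs.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, c := range components {
		// Name filter on the dockershim k8s_ prefix, IDs only.
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "listing failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Mirrors the W-level "No container was found matching" lines above.
			fmt.Printf("warning: no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}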
	I0320 17:21:56.272166   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:58.417410   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:58.417554   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:58.433222   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:58.433313   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:58.451986   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:58.452057   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:58.462970   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:58.463040   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:58.473921   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:58.473988   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:58.484201   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:58.484265   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:58.494995   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:58.495059   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:58.505072   14525 logs.go:276] 0 containers: []
	W0320 17:21:58.505082   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:58.505135   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:58.515283   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:58.515298   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:58.515302   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:58.527518   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:58.527531   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:58.550966   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:58.550977   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:58.562757   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:58.562769   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:58.574720   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:58.574730   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:58.586941   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:58.586954   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:58.598823   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:58.598836   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:58.612974   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:58.612986   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:58.628479   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:58.628489   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:58.640256   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:58.640267   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:58.652800   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:58.652811   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:58.688156   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:58.688170   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:58.706659   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:58.706669   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:58.723915   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:58.723929   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:58.757855   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:58.757863   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
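Besides per-container logs, every sweep collects three host-level sources: the kubelet journal, the combined docker/cri-docker journal, and kernel messages filtered to warning severity and above, each capped at 400 lines. The command strings below are verbatim from the log, wrapped in a small local Go driver for illustration (the real flow runs them over SSH inside the guest):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ordered list of the host-level sources gathered in each sweep.
	sources := []struct{ name, cmd string }{
		{"kubelet", "sudo journalctl -u kubelet -n 400"},
		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	}
	for _, s := range sources {
		out, err := exec.Command("/bin/bash", "-c", s.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s: %v\n", s.name, err)
			continue
		}
		fmt.Printf("== %s ==\n%s", s.name, out)
	}
}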
	I0320 17:22:01.263816   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:01.274178   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:01.274292   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:01.290089   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:01.290169   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:01.315054   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:01.315115   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:01.325791   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:01.325864   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:01.337003   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:01.337063   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:01.347184   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:01.347250   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:01.357333   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:01.357399   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:01.367228   14536 logs.go:276] 0 containers: []
	W0320 17:22:01.367239   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:01.367298   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:01.381383   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:01.381402   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:01.381408   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:01.393261   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:01.393272   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:01.406910   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:01.406924   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:01.424954   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:01.424974   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:01.440580   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:01.440593   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:01.452011   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:01.452021   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:01.464012   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:01.464025   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:01.468516   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:01.468523   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:01.487559   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:01.487569   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:01.498866   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:01.498880   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:01.510109   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:01.510119   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:01.545255   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:01.545264   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:01.558741   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:01.558751   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:01.574006   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:01.574015   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:01.610263   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:01.610275   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:06.265957   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:06.266228   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:06.293443   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:06.293557   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:06.319449   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:06.319524   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:06.333726   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:06.333798   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:06.346513   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:06.346582   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:06.357168   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:06.357234   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:06.371734   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:06.371798   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:06.381707   14525 logs.go:276] 0 containers: []
	W0320 17:22:06.381721   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:06.381779   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:06.392407   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:06.392425   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:06.392430   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:06.403984   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:06.403997   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:06.415178   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:06.415187   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:06.432602   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:06.432611   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:06.444210   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:06.444219   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:06.468824   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:06.468832   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:06.504404   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:06.504413   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:06.508729   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:06.508738   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:06.522681   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:06.522693   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:06.534242   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:06.534256   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:06.546020   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:06.546031   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:06.557775   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:06.557785   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:06.593334   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:06.593344   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:06.607441   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:06.607451   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:06.619119   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:06.619130   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
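Note that the "describe nodes" step does not depend on the host's kubectl: it invokes the version-matched binary that minikube provisions inside the guest (v1.24.1 here) against the admin kubeconfig on disk, so node state stays inspectable even while the apiserver endpoint is failing its external health checks. A sketch of the same invocation, run locally here for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Verbatim invocation from the log: the guest's bundled, version-matched
	// kubectl pointed at the admin kubeconfig.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.24.1/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		fmt.Println("describe nodes failed:", err)
	}
	fmt.Print(string(out))
}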
	I0320 17:22:04.136477   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:09.135969   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:09.138559   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:09.138693   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:09.151579   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:09.151655   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:09.162104   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:09.162171   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:09.172820   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:09.172884   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:09.185635   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:09.185698   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:09.196364   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:09.196424   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:09.207017   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:09.207079   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:09.222362   14536 logs.go:276] 0 containers: []
	W0320 17:22:09.222373   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:09.222426   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:09.232931   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:09.232947   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:09.232953   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:09.244502   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:09.244514   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:09.256128   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:09.256143   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:09.267927   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:09.267940   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:09.279539   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:09.279550   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:09.303579   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:09.303588   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:09.340999   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:09.341028   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:09.361782   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:09.361793   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:09.373858   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:09.373868   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:09.408787   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:09.408796   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:09.423266   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:09.423276   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:09.428025   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:09.428032   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:09.451342   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:09.451352   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:09.463150   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:09.463166   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:09.475160   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:09.475172   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:11.995751   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:14.137123   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:14.137344   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:14.158348   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:14.158441   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:14.173191   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:14.173259   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:14.187565   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:14.187638   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:14.197981   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:14.198047   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:14.208571   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:14.208636   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:14.223893   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:14.223962   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:14.234638   14525 logs.go:276] 0 containers: []
	W0320 17:22:14.234649   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:14.234704   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:14.248327   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:14.248344   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:14.248350   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:14.260056   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:14.260067   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:14.285305   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:14.285315   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:14.299587   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:14.299597   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:14.310818   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:14.310828   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:14.321923   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:14.321933   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:14.333846   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:14.333855   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:14.352785   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:14.352795   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:14.367751   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:14.367763   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:14.386655   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:14.386668   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:14.398151   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:14.398161   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:14.409445   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:14.409455   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:14.443547   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:14.443555   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:14.447816   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:14.447825   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:14.482637   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:14.482650   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:16.996149   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:16.996292   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:17.014394   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:17.014472   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:17.027047   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:17.027119   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:17.038681   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:17.038759   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:17.048746   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:17.048812   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:17.059556   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:17.059626   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:17.074148   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:17.074211   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:17.084228   14536 logs.go:276] 0 containers: []
	W0320 17:22:17.084241   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:17.084301   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:17.095137   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:17.095153   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:17.095159   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:17.106545   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:17.106555   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:17.117849   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:17.117859   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:17.135568   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:17.135579   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:17.149798   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:17.149807   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:17.174706   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:17.174714   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:16.996099   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:17.210910   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:17.211308   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:17.226538   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:17.226548   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:17.238457   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:17.238468   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:17.250195   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:17.250205   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:17.265024   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:17.265035   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:17.298575   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:17.298584   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:17.303290   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:17.303298   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:17.314580   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:17.314591   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:17.329584   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:17.329594   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:19.843134   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:21.998284   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:21.998530   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:22.025654   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:22.025736   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:22.041116   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:22.041193   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:22.053911   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:22.053981   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:22.064302   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:22.064378   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:22.077370   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:22.077443   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:22.087971   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:22.088040   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:22.102892   14525 logs.go:276] 0 containers: []
	W0320 17:22:22.102904   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:22.102963   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:22.113677   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:22.113694   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:22.113699   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:22.125501   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:22.125512   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:22.140253   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:22.140264   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:22.145072   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:22.145081   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:22.159118   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:22.159129   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:22.170669   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:22.170679   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:22.188300   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:22.188310   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:22.200579   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:22.200588   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:22.215935   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:22.215944   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:22.240769   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:22.240777   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:22.276431   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:22.276438   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:22.312063   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:22.312074   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:22.326421   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:22.326432   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:22.341393   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:22.341405   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:22.354003   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:22.354015   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:24.867945   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:24.845217   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:24.845395   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:24.863757   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:24.863834   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:24.878392   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:24.878461   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:24.890605   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:24.890676   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:24.901280   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:24.901345   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:24.911659   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:24.911729   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:24.921901   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:24.921968   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:24.931852   14536 logs.go:276] 0 containers: []
	W0320 17:22:24.931864   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:24.931925   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:24.942207   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:24.942228   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:24.942234   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:24.947035   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:24.947042   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:24.962150   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:24.962159   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:24.974250   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:24.974260   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:24.999992   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:25.000010   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:25.037449   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:25.037467   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:25.052757   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:25.052766   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:25.068138   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:25.068148   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:25.080705   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:25.080724   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:25.098727   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:25.098736   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:25.111435   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:25.111446   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:25.146272   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:25.146283   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:25.167845   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:25.167854   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:25.179697   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:25.179708   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:25.192284   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:25.192295   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:29.869985   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:29.870208   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:29.898453   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:29.898585   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:29.917265   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:29.917359   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:29.936630   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:29.936707   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:29.948974   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:29.949041   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:29.960315   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:29.960384   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:29.972822   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:29.972886   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:29.984059   14525 logs.go:276] 0 containers: []
	W0320 17:22:29.984074   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:29.984131   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:29.995250   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:29.995268   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:29.995275   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:30.030504   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:30.030515   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:30.046054   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:30.046067   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:30.050241   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:30.050250   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:30.063932   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:30.063941   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:30.075723   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:30.075735   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:30.087315   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:30.087326   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:30.105131   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:30.105143   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:30.119981   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:30.119993   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:30.130948   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:30.130958   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:30.143329   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:30.143342   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:30.155116   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:30.155124   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:30.166345   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:30.166357   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:30.199844   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:30.199858   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:30.224520   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:30.224530   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:27.711003   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:32.738219   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:32.713165   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:32.713356   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:32.733525   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:32.733595   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:32.753568   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:32.753643   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:32.765055   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:32.765122   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:32.775493   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:32.775549   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:32.791974   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:32.792037   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:32.803798   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:32.803866   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:32.814206   14536 logs.go:276] 0 containers: []
	W0320 17:22:32.814218   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:32.814268   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:32.824445   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:32.824461   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:32.824466   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:32.859075   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:32.859089   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:32.873139   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:32.873155   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:32.884541   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:32.884554   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:32.896112   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:32.896122   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:32.900356   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:32.900362   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:32.935605   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:32.935616   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:32.950948   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:32.950959   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:32.962524   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:32.962533   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:32.978031   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:32.978041   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:32.996714   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:32.996725   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:33.020281   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:33.020289   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:33.031405   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:33.031416   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:33.043268   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:33.043278   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:33.054821   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:33.054836   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
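
Each retry re-enumerates the control-plane containers one component at a time; the logs.go:276 "N containers: [...]" lines are just the parsed output of `docker ps -a --filter=name=k8s_<component> --format={{.ID}}`. A sketch of that discovery step follows, assuming only a local docker CLI (the report actually runs it through ssh_runner.go inside the guest VM):

```go
// Illustrative sketch (assumed, not minikube source): list container IDs
// per Kubernetes component the way the logs.go:276 lines report them.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	// One ID per line; empty output yields an empty slice.
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
}
```

An empty result is what produces the `0 containers: []` / `No container was found matching "kindnet"` pair above: no kindnet container exists on this cluster, so that warning repeats on every cycle and is not itself the failure.
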
	I0320 17:22:35.568267   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:37.740215   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:37.740387   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:37.757419   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:37.757505   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:37.778702   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:37.778777   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:37.790350   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:37.790428   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:37.800881   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:37.800945   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:37.811853   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:37.811920   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:37.822446   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:37.822512   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:37.832405   14525 logs.go:276] 0 containers: []
	W0320 17:22:37.832417   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:37.832473   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:37.842618   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:37.842639   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:37.842646   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:37.854446   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:37.854457   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:37.871968   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:37.871978   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:37.884221   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:37.884231   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:37.920351   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:37.920361   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:37.931972   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:37.931982   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:37.945390   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:37.945403   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:37.957857   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:37.957866   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:37.992471   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:37.992484   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:38.007225   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:38.007235   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:38.021262   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:38.021276   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:38.038578   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:38.038588   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:38.062972   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:38.062982   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:38.067760   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:38.067767   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:38.079127   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:38.079141   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:40.604586   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:40.570766   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:40.571014   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:40.603533   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:40.603631   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:40.620196   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:40.620276   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:40.632916   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:40.632987   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:40.644071   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:40.644141   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:40.654847   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:40.654906   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:40.665440   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:40.665506   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:40.675339   14536 logs.go:276] 0 containers: []
	W0320 17:22:40.675349   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:40.675395   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:40.688814   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:40.688836   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:40.688841   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:40.704856   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:40.704867   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:40.723501   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:40.723511   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:40.746346   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:40.746353   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:40.760746   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:40.760755   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:40.772837   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:40.772850   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:40.789142   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:40.789154   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:40.801316   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:40.801327   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:40.813028   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:40.813038   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:40.853320   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:40.853329   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:40.857919   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:40.857928   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:40.892810   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:40.892822   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:40.905656   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:40.905667   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:40.917864   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:40.917875   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:40.937744   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:40.937754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:45.606725   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:45.607066   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:45.646469   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:45.646578   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:45.665588   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:45.665664   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:45.679946   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:45.680017   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:45.691860   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:45.691922   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:45.702329   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:45.702402   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:45.713096   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:45.713161   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:45.730106   14525 logs.go:276] 0 containers: []
	W0320 17:22:45.730120   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:45.730179   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:45.740747   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:45.740773   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:45.740777   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:45.745351   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:45.745358   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:45.761853   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:45.761862   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:45.776816   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:45.776826   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:45.788944   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:45.788956   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:45.809311   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:45.809320   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:45.820885   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:45.820906   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:45.832729   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:45.832738   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:45.857523   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:45.857532   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:45.893081   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:45.893091   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:45.907387   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:45.907398   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:45.920084   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:45.920097   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:45.932706   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:45.932718   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:45.945895   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:45.945905   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:45.981687   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:45.981698   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:43.452011   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:48.498735   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:48.454110   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:48.454262   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:48.468625   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:48.468705   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:48.480709   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:48.480774   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:48.491508   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:48.491578   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:48.502125   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:48.502180   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:48.511910   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:48.511972   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:48.526838   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:48.526908   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:48.536940   14536 logs.go:276] 0 containers: []
	W0320 17:22:48.536950   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:48.537004   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:48.551635   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:48.551653   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:48.551659   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:48.570968   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:48.570979   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:48.596360   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:48.596380   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:48.601363   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:48.601370   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:48.639083   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:48.639099   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:22:48.653744   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:48.653754   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:48.665460   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:48.665469   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:48.677584   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:48.677597   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:48.692194   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:48.692207   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:48.704128   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:48.704143   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:48.716108   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:48.716117   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:48.733914   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:48.733928   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:48.768847   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:48.768855   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:48.780791   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:48.780805   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:48.792453   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:48.792462   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
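
Once the container IDs are known, each "Gathering logs for X ..." line pairs a source with one shell command: journalctl units for kubelet and Docker, `docker logs --tail 400 <id>` per container, a filtered dmesg, `kubectl describe nodes` against the in-VM kubeconfig, and a crictl-with-docker fallback for overall container status (the backtick substitution picks crictl when installed; otherwise the literal word `crictl` fails to run and `||` falls through to `docker ps -a`). Below is a condensed sketch of that dispatch table; local execution is a hypothetical stand-in for the report's ssh_runner, and the etcd ID is copied from the cycle above.

```go
// Illustrative sketch (assumptions noted inline): one shell command per
// log source, matching the "Gathering logs for ..." lines in this report.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet":             `sudo journalctl -u kubelet -n 400`,
		"Docker":              `sudo journalctl -u docker -u cri-docker -n 400`,
		"dmesg":               `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"container status":    "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"etcd [7443330ef427]": `docker logs --tail 400 7443330ef427`,
	}
	for name, cmd := range cmds {
		fmt.Printf("Gathering logs for %s ...\n", name)
		// In the report these run via ssh_runner.go inside the guest VM;
		// running them locally only works on a matching host with sudo.
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("  %s failed: %v\n", name, err)
			continue
		}
		fmt.Printf("  %d bytes of output\n", len(out))
	}
}
```
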
	I0320 17:22:51.307031   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:53.500721   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:53.500948   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:53.521555   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:53.521645   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:53.536569   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:53.536641   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:53.548578   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:53.548649   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:53.559559   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:53.559631   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:53.570077   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:53.570144   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:53.580375   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:53.580438   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:53.594933   14525 logs.go:276] 0 containers: []
	W0320 17:22:53.594945   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:53.595002   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:53.605576   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:53.605591   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:53.605596   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:53.616853   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:53.616862   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:53.652577   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:53.652588   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:53.657019   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:53.657028   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:53.668052   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:53.668066   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:53.682097   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:53.682106   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:53.694014   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:53.694027   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:53.710100   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:53.710109   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:53.721760   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:53.721773   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:53.733663   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:53.733675   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:53.768122   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:53.768131   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:53.782671   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:53.782682   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:53.798362   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:53.798374   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:53.812888   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:53.812900   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:53.833812   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:53.833822   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:56.360135   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:56.309101   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:56.309214   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:56.321293   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:22:56.321368   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:56.331686   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:22:56.331744   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:56.342034   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:22:56.342105   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:56.352399   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:22:56.352456   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:56.362908   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:22:56.362972   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:56.373411   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:22:56.373477   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:56.386796   14536 logs.go:276] 0 containers: []
	W0320 17:22:56.386808   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:56.386861   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:56.399716   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:22:56.399734   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:22:56.399739   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:22:56.412181   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:22:56.412194   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:22:56.427693   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:22:56.427704   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:22:56.440944   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:22:56.440957   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:22:56.455920   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:22:56.455931   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:22:56.467466   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:56.467477   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:56.501873   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:56.501883   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:56.537112   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:22:56.537124   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:22:56.549378   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:56.549391   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:56.572295   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:56.572302   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:56.576948   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:22:56.576957   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:22:56.602407   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:22:56.602421   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:22:56.619734   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:22:56.619748   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:22:56.637244   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:22:56.637254   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:56.649463   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:22:56.649475   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:01.362103   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:01.362263   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:01.378594   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:23:01.378672   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:01.389389   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:23:01.389451   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:01.400232   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:23:01.400305   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:01.414012   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:23:01.414077   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:01.424506   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:23:01.424578   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:01.435091   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:23:01.435176   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:01.445448   14525 logs.go:276] 0 containers: []
	W0320 17:23:01.445459   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:01.445516   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:01.455734   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:23:01.455750   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:23:01.455754   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:23:01.467174   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:23:01.467187   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:23:01.478613   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:23:01.478622   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:23:01.490151   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:23:01.490162   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:23:01.502617   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:23:01.502628   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:23:01.519456   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:01.519466   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:01.524299   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:23:01.524307   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:23:01.538216   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:23:01.538229   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:23:01.549624   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:23:01.549635   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:23:01.564569   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:23:01.564580   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:23:01.579315   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:01.579326   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:01.603098   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:23:01.603109   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:01.614459   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:01.614470   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:01.649600   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:23:01.649613   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:23:01.666245   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:01.666256   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:59.165671   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:04.201634   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:04.167903   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:04.168182   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:04.201651   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:04.201753   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:04.220201   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:04.220283   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:04.234382   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:04.234472   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:04.245935   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:04.246008   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:04.256576   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:04.256644   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:04.267375   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:04.267436   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:04.278552   14536 logs.go:276] 0 containers: []
	W0320 17:23:04.278565   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:04.278625   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:04.289698   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:04.289716   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:04.289721   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:04.301403   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:04.301414   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:04.336183   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:04.336192   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:04.348162   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:04.348172   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:04.359522   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:04.359533   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:04.370901   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:04.370915   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:04.382535   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:04.382545   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:04.394204   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:04.394216   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:04.413236   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:04.413248   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:04.429958   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:04.429969   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:04.443763   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:04.443774   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:04.457039   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:04.457056   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:04.461799   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:04.461805   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:04.499295   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:04.499310   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:04.523085   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:04.523091   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:07.040193   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:09.203725   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:09.203895   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:09.223911   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:23:09.223988   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:09.238803   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:23:09.238890   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:09.250346   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:23:09.250413   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:09.260631   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:23:09.260709   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:09.271531   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:23:09.271598   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:09.284281   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:23:09.284343   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:09.296085   14525 logs.go:276] 0 containers: []
	W0320 17:23:09.296097   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:09.296152   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:09.311556   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:23:09.311577   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:09.311582   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:09.345797   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:23:09.345807   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:23:09.359297   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:23:09.359307   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:23:09.377487   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:23:09.377501   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:23:09.389010   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:09.389024   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:09.412945   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:23:09.412953   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:23:09.424458   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:23:09.424468   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:23:09.436020   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:23:09.436032   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:23:09.447912   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:23:09.447921   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:23:09.466710   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:23:09.466719   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:23:09.481211   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:23:09.481220   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:09.492973   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:23:09.492989   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:23:09.506404   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:09.506414   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:09.541381   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:09.541389   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:09.545465   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:23:09.545471   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:23:12.042300   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:12.042569   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:12.067628   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:12.067716   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:12.085791   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:12.085864   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:12.098721   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:12.098794   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:12.109861   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:12.109925   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:12.120198   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:12.120266   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:12.131280   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:12.131345   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:12.141295   14536 logs.go:276] 0 containers: []
	W0320 17:23:12.141306   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:12.141360   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:12.151290   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:12.151311   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:12.151315   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:12.162692   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:12.162701   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:12.174142   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:12.174152   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:12.191575   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:12.191584   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:12.061766   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:12.216159   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:12.216168   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:12.230386   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:12.230401   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:12.242052   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:12.242065   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:12.246438   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:12.246448   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:12.281941   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:12.281954   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:12.300114   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:12.300125   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:12.311823   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:12.311836   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:12.332118   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:12.332129   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:12.366208   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:12.366219   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:12.382930   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:12.382943   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:12.394419   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:12.394434   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:14.911301   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:17.063903   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:17.064141   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:17.081687   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:23:17.081780   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:17.095158   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:23:17.095229   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:17.107260   14525 logs.go:276] 4 containers: [1e63d2cf88f0 5e7f30c8db04 bbc57dcc3a9f 2796479991d6]
	I0320 17:23:17.107321   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:17.118445   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:23:17.118515   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:17.129359   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:23:17.129425   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:17.140107   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:23:17.140165   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:17.151098   14525 logs.go:276] 0 containers: []
	W0320 17:23:17.151108   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:17.151154   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:17.165032   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:23:17.165050   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:17.165055   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:17.200432   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:17.200440   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:17.204819   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:23:17.204827   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:23:17.218629   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:23:17.218639   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:23:17.236698   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:17.236707   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:17.270499   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:23:17.270512   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:23:17.282489   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:17.282502   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:17.306673   14525 logs.go:123] Gathering logs for coredns [1e63d2cf88f0] ...
	I0320 17:23:17.306683   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e63d2cf88f0"
	I0320 17:23:17.318777   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:23:17.318788   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:23:17.333256   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:23:17.333266   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:17.346006   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:23:17.346017   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:23:17.359873   14525 logs.go:123] Gathering logs for coredns [5e7f30c8db04] ...
	I0320 17:23:17.359883   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e7f30c8db04"
	I0320 17:23:17.370827   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:23:17.370841   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:23:17.382600   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:23:17.382609   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:23:17.398636   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:23:17.398645   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:23:19.916922   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:19.913451   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:19.913615   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:19.929165   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:19.929247   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:19.942152   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:19.942231   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:19.953437   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:19.953512   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:19.963996   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:19.964060   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:19.975185   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:19.975258   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:19.985707   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:19.985774   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:19.999069   14536 logs.go:276] 0 containers: []
	W0320 17:23:19.999080   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:19.999135   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:20.009284   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:20.009298   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:20.009303   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:20.044522   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:20.044537   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:20.073140   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:20.073152   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:20.085635   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:20.085647   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:20.090221   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:20.090228   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:20.106093   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:20.106104   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:20.121888   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:20.121901   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:20.145235   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:20.145243   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:20.156490   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:20.156504   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:20.170965   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:20.170975   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:20.182556   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:20.182569   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:20.195880   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:20.195889   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:20.207682   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:20.207698   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:20.242828   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:20.242838   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:20.255959   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:20.255969   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:24.918964   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:24.924346   14525 out.go:177] 
	W0320 17:23:24.927464   14525 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0320 17:23:24.927477   14525 out.go:239] * 
	W0320 17:23:24.928648   14525 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:23:24.939294   14525 out.go:177] 
	I0320 17:23:22.774074   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:27.776079   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:27.776187   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:27.794406   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:27.794505   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:27.805069   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:27.805134   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:27.815902   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:27.815971   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:27.830765   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:27.830824   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:27.842888   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:27.842959   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:27.853445   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:27.853503   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:27.863761   14536 logs.go:276] 0 containers: []
	W0320 17:23:27.863770   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:27.863822   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:27.873998   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:27.874020   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:27.874025   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:27.886499   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:27.886509   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:27.903192   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:27.903204   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:27.917495   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:27.917507   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:27.931536   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:27.931550   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:27.949517   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:27.949528   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:27.954297   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:27.954305   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:27.990406   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:27.990417   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:28.004438   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:28.004447   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:28.015739   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:28.015752   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:28.027149   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:28.027160   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:28.042921   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:28.042934   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:28.065671   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:28.065678   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:28.098908   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:28.098917   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:28.110243   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:28.110255   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:30.623916   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:35.626012   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:35.626197   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:35.647437   14536 logs.go:276] 1 containers: [f64a8e99608b]
	I0320 17:23:35.647520   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:35.661139   14536 logs.go:276] 1 containers: [7443330ef427]
	I0320 17:23:35.661198   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:35.673368   14536 logs.go:276] 4 containers: [e752c102f4d6 1521278fdbef a7ad224e3b14 b9570a8a8ad9]
	I0320 17:23:35.673435   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:35.683517   14536 logs.go:276] 1 containers: [a89aa0467f21]
	I0320 17:23:35.683574   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:35.694212   14536 logs.go:276] 1 containers: [86ac7ef755ac]
	I0320 17:23:35.694282   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:35.704848   14536 logs.go:276] 1 containers: [c8393305b944]
	I0320 17:23:35.704914   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:35.714369   14536 logs.go:276] 0 containers: []
	W0320 17:23:35.714380   14536 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:35.714434   14536 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:35.724473   14536 logs.go:276] 1 containers: [59187ac0d704]
	I0320 17:23:35.724490   14536 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:35.724494   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:35.759951   14536 logs.go:123] Gathering logs for coredns [e752c102f4d6] ...
	I0320 17:23:35.759962   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e752c102f4d6"
	I0320 17:23:35.772866   14536 logs.go:123] Gathering logs for kube-apiserver [f64a8e99608b] ...
	I0320 17:23:35.772877   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f64a8e99608b"
	I0320 17:23:35.788197   14536 logs.go:123] Gathering logs for etcd [7443330ef427] ...
	I0320 17:23:35.788208   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7443330ef427"
	I0320 17:23:35.802368   14536 logs.go:123] Gathering logs for coredns [a7ad224e3b14] ...
	I0320 17:23:35.802381   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a7ad224e3b14"
	I0320 17:23:35.813210   14536 logs.go:123] Gathering logs for storage-provisioner [59187ac0d704] ...
	I0320 17:23:35.813224   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 59187ac0d704"
	I0320 17:23:35.824712   14536 logs.go:123] Gathering logs for kube-proxy [86ac7ef755ac] ...
	I0320 17:23:35.824725   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 86ac7ef755ac"
	I0320 17:23:35.837690   14536 logs.go:123] Gathering logs for kube-controller-manager [c8393305b944] ...
	I0320 17:23:35.837702   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c8393305b944"
	I0320 17:23:35.855301   14536 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:35.855311   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:35.879336   14536 logs.go:123] Gathering logs for container status ...
	I0320 17:23:35.879346   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:35.891319   14536 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:35.891330   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:35.895767   14536 logs.go:123] Gathering logs for coredns [1521278fdbef] ...
	I0320 17:23:35.895774   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1521278fdbef"
	I0320 17:23:35.907260   14536 logs.go:123] Gathering logs for coredns [b9570a8a8ad9] ...
	I0320 17:23:35.907272   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9570a8a8ad9"
	I0320 17:23:35.919228   14536 logs.go:123] Gathering logs for kube-scheduler [a89aa0467f21] ...
	I0320 17:23:35.919238   14536 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 a89aa0467f21"
	I0320 17:23:35.934109   14536 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:35.934120   14536 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:38.471992   14536 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:43.474029   14536 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:43.476934   14536 out.go:177] 
	W0320 17:23:43.481372   14536 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0320 17:23:43.481379   14536 out.go:239] * 
	W0320 17:23:43.481827   14536 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:23:43.494327   14536 out.go:177] 
	
	
	==> Docker <==
	-- Journal begins at Thu 2024-03-21 00:14:19 UTC, ends at Thu 2024-03-21 00:23:59 UTC. --
	Mar 21 00:23:44 running-upgrade-120000 dockerd[4396]: time="2024-03-21T00:23:44.248227012Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/401190c3abaa21bc8264332c99c14f348d4538e10e05ef6ba02bf1d7a5b3c03c pid=20308 runtime=io.containerd.runc.v2
	Mar 21 00:23:44 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:44Z" level=error msg="ContainerStats resp: {0x40004fd140 linux}"
	Mar 21 00:23:44 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:44Z" level=error msg="ContainerStats resp: {0x40004fd600 linux}"
	Mar 21 00:23:44 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:44Z" level=error msg="ContainerStats resp: {0x40001b4980 linux}"
	Mar 21 00:23:45 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:45Z" level=error msg="ContainerStats resp: {0x40001f01c0 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x40008e8b00 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x400081a400 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x400081a940 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x400081ad00 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x400081b480 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x400081bd80 linux}"
	Mar 21 00:23:46 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:46Z" level=error msg="ContainerStats resp: {0x40008e9d80 linux}"
	Mar 21 00:23:47 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:47Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 21 00:23:52 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:52Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 21 00:23:56 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:56Z" level=error msg="ContainerStats resp: {0x40001f04c0 linux}"
	Mar 21 00:23:56 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:56Z" level=error msg="ContainerStats resp: {0x400081a9c0 linux}"
	Mar 21 00:23:57 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:57Z" level=error msg="ContainerStats resp: {0x400081be80 linux}"
	Mar 21 00:23:57 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:57Z" level=info msg="Using CNI configuration file /etc/cni/net.d/1-k8s.conflist"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x40003a19c0 linux}"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x4000996040 linux}"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x40004fc7c0 linux}"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x40004fd280 linux}"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x40009964c0 linux}"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x4000996040 linux}"
	Mar 21 00:23:58 running-upgrade-120000 cri-dockerd[4114]: time="2024-03-21T00:23:58Z" level=error msg="ContainerStats resp: {0x4000997840 linux}"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	401190c3abaa2       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   73c287193d32c
	0592e710635c3       edaa71f2aee88       15 seconds ago      Running             coredns                   2                   e9dc2eaf1b397
	e752c102f4d6a       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   e9dc2eaf1b397
	1521278fdbefa       edaa71f2aee88       2 minutes ago       Exited              coredns                   1                   73c287193d32c
	86ac7ef755acc       fcbd620bbac08       4 minutes ago       Running             kube-proxy                0                   30a93e7f9a273
	59187ac0d704c       ba04bb24b9575       4 minutes ago       Running             storage-provisioner       0                   a4a32b0ad45b8
	c8393305b944f       f61bbe9259d7c       4 minutes ago       Running             kube-controller-manager   0                   fbbaa97c580ed
	7443330ef4270       a9a710bb96df0       4 minutes ago       Running             etcd                      0                   0d61d1f66fcd0
	f64a8e99608be       7c5896a75862a       4 minutes ago       Running             kube-apiserver            0                   9dbcd52cf87c5
	a89aa0467f212       000c19baf6bba       4 minutes ago       Running             kube-scheduler            0                   07ab5261c0067
	
	
	==> coredns [0592e710635c] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 1930728927252429659.1845378993054914409. HINFO: read udp 10.244.0.3:54417->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1930728927252429659.1845378993054914409. HINFO: read udp 10.244.0.3:58634->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 1930728927252429659.1845378993054914409. HINFO: read udp 10.244.0.3:33304->10.0.2.3:53: i/o timeout
	
	
	==> coredns [1521278fdbef] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:55643->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:53084->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:33955->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:42371->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:51668->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:46617->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:36435->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:42456->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:39622->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 2181324867886567691.6170775569997101965. HINFO: read udp 10.244.0.2:50268->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [401190c3abaa] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 4883278631024128646.5563663268122747243. HINFO: read udp 10.244.0.2:57460->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4883278631024128646.5563663268122747243. HINFO: read udp 10.244.0.2:41263->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 4883278631024128646.5563663268122747243. HINFO: read udp 10.244.0.2:40355->10.0.2.3:53: i/o timeout
	
	
	==> coredns [e752c102f4d6] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
	CoreDNS-1.8.6
	linux/arm64, go1.17.1, 13a9191
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:50437->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:43688->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:36172->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:60670->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:38447->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:60757->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:34731->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:39478->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:59262->10.0.2.3:53: i/o timeout
	[ERROR] plugin/errors: 2 532847719247105377.731877221046174569. HINFO: read udp 10.244.0.3:46024->10.0.2.3:53: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               running-upgrade-120000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=running-upgrade-120000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=9b0fc36eb9ca8470d9076ea1f55dc7d08f560c77
	                    minikube.k8s.io/name=running-upgrade-120000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_20T17_19_42_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 21 Mar 2024 00:19:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  running-upgrade-120000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 21 Mar 2024 00:23:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 21 Mar 2024 00:19:42 +0000   Thu, 21 Mar 2024 00:19:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 21 Mar 2024 00:19:42 +0000   Thu, 21 Mar 2024 00:19:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 21 Mar 2024 00:19:42 +0000   Thu, 21 Mar 2024 00:19:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 21 Mar 2024 00:19:42 +0000   Thu, 21 Mar 2024 00:19:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  10.0.2.15
	  Hostname:    running-upgrade-120000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784760Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             2148820Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8ab14752110433687426effd4ff4513
	  System UUID:                b8ab14752110433687426effd4ff4513
	  Boot ID:                    65fa2022-ac56-4123-a45b-a862c1e04a35
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://20.10.16
	  Kubelet Version:            v1.24.1
	  Kube-Proxy Version:         v1.24.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-55469                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 coredns-6d4b75cb6d-t2kcw                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m4s
	  kube-system                 etcd-running-upgrade-120000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m17s
	  kube-system                 kube-apiserver-running-upgrade-120000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-controller-manager-running-upgrade-120000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-26xdv                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 kube-scheduler-running-upgrade-120000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m2s                   kube-proxy       
	  Normal  Starting                 4m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m23s (x5 over 4m23s)  kubelet          Node running-upgrade-120000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-120000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m23s (x4 over 4m23s)  kubelet          Node running-upgrade-120000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m18s                  kubelet          Node running-upgrade-120000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s                  kubelet          Node running-upgrade-120000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s                  kubelet          Node running-upgrade-120000 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m18s                  kubelet          Node running-upgrade-120000 status is now: NodeReady
	  Normal  RegisteredNode           4m5s                   node-controller  Node running-upgrade-120000 event: Registered Node running-upgrade-120000 in Controller
	
	
	==> dmesg <==
	[  +0.086791] systemd-fstab-generator[887]: Ignoring "noauto" for root device
	[  +0.077726] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[  +1.136312] kauditd_printk_skb: 53 callbacks suppressed
	[  +0.091570] systemd-fstab-generator[1048]: Ignoring "noauto" for root device
	[  +0.080282] systemd-fstab-generator[1059]: Ignoring "noauto" for root device
	[  +2.122677] systemd-fstab-generator[1285]: Ignoring "noauto" for root device
	[ +14.155203] systemd-fstab-generator[1970]: Ignoring "noauto" for root device
	[Mar21 00:15] kauditd_printk_skb: 47 callbacks suppressed
	[  +1.354321] systemd-fstab-generator[2715]: Ignoring "noauto" for root device
	[  +0.230087] systemd-fstab-generator[2756]: Ignoring "noauto" for root device
	[  +0.110221] systemd-fstab-generator[2767]: Ignoring "noauto" for root device
	[  +0.107032] systemd-fstab-generator[2780]: Ignoring "noauto" for root device
	[  +5.128177] kauditd_printk_skb: 14 callbacks suppressed
	[ +11.512945] systemd-fstab-generator[4071]: Ignoring "noauto" for root device
	[  +0.091483] systemd-fstab-generator[4082]: Ignoring "noauto" for root device
	[  +0.090504] systemd-fstab-generator[4093]: Ignoring "noauto" for root device
	[  +0.103645] systemd-fstab-generator[4107]: Ignoring "noauto" for root device
	[  +2.304016] systemd-fstab-generator[4261]: Ignoring "noauto" for root device
	[  +4.099355] systemd-fstab-generator[4754]: Ignoring "noauto" for root device
	[  +1.027633] systemd-fstab-generator[4880]: Ignoring "noauto" for root device
	[  +6.647258] kauditd_printk_skb: 80 callbacks suppressed
	[ +10.142823] kauditd_printk_skb: 3 callbacks suppressed
	[Mar21 00:19] systemd-fstab-generator[13599]: Ignoring "noauto" for root device
	[  +5.640556] systemd-fstab-generator[14213]: Ignoring "noauto" for root device
	[  +0.458154] systemd-fstab-generator[14346]: Ignoring "noauto" for root device
	
	
	==> etcd [7443330ef427] <==
	{"level":"info","ts":"2024-03-21T00:19:38.460Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"f074a195de705325","initial-advertise-peer-urls":["https://10.0.2.15:2380"],"listen-peer-urls":["https://10.0.2.15:2380"],"advertise-client-urls":["https://10.0.2.15:2379"],"listen-client-urls":["https://10.0.2.15:2379","https://127.0.0.1:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-21T00:19:38.460Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-21T00:19:38.462Z","caller":"etcdserver/server.go:736","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f074a195de705325","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-21T00:19:38.470Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-21T00:19:38.470Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"10.0.2.15:2380"}
	{"level":"info","ts":"2024-03-21T00:19:38.470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 switched to configuration voters=(17326651331455243045)"}
	{"level":"info","ts":"2024-03-21T00:19:38.470Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","added-peer-id":"f074a195de705325","added-peer-peer-urls":["https://10.0.2.15:2380"]}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgPreVoteResp from f074a195de705325 at term 1"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became candidate at term 2"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 received MsgVoteResp from f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f074a195de705325 became leader at term 2"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f074a195de705325 elected leader f074a195de705325 at term 2"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"f074a195de705325","local-member-attributes":"{Name:running-upgrade-120000 ClientURLs:[https://10.0.2.15:2379]}","request-path":"/0/members/f074a195de705325/attributes","cluster-id":"ef296cf39f5d9d66","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-21T00:19:38.879Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","caller":"etcdserver/server.go:2507","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-21T00:19:38.883Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ef296cf39f5d9d66","local-member-id":"f074a195de705325","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-21T00:19:38.883Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-21T00:19:38.882Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-21T00:19:38.893Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-21T00:19:38.878Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-21T00:19:38.893Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"10.0.2.15:2379"}
	{"level":"info","ts":"2024-03-21T00:19:38.895Z","caller":"etcdserver/server.go:2531","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 00:24:00 up 9 min,  0 users,  load average: 0.38, 0.43, 0.24
	Linux running-upgrade-120000 5.10.57 #1 SMP PREEMPT Thu Jun 16 21:01:29 UTC 2022 aarch64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [f64a8e99608b] <==
	I0321 00:19:40.184520       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0321 00:19:40.185618       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0321 00:19:40.185625       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0321 00:19:40.186295       1 cache.go:39] Caches are synced for autoregister controller
	I0321 00:19:40.188120       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0321 00:19:40.198270       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0321 00:19:40.204404       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0321 00:19:40.917607       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0321 00:19:41.094016       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0321 00:19:41.099288       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0321 00:19:41.099365       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0321 00:19:41.240327       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0321 00:19:41.252887       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0321 00:19:41.365358       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0321 00:19:41.367338       1 lease.go:234] Resetting endpoints for master service "kubernetes" to [10.0.2.15]
	I0321 00:19:41.367682       1 controller.go:611] quota admission added evaluator for: endpoints
	I0321 00:19:41.368911       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0321 00:19:42.243875       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0321 00:19:42.700006       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0321 00:19:42.703233       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0321 00:19:42.712075       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0321 00:19:42.753800       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0321 00:19:55.849664       1 controller.go:611] quota admission added evaluator for: replicasets.apps
	I0321 00:19:55.897188       1 controller.go:611] quota admission added evaluator for: controllerrevisions.apps
	I0321 00:19:57.046987       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	
	==> kube-controller-manager [c8393305b944] <==
	I0321 00:19:55.253428       1 shared_informer.go:262] Caches are synced for taint
	I0321 00:19:55.253458       1 range_allocator.go:173] Starting range CIDR allocator
	I0321 00:19:55.253489       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0321 00:19:55.253510       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0321 00:19:55.253565       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0321 00:19:55.253467       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0321 00:19:55.253662       1 node_lifecycle_controller.go:1014] Missing timestamp for Node running-upgrade-120000. Assuming now as a timestamp.
	I0321 00:19:55.253713       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0321 00:19:55.253714       1 event.go:294] "Event occurred" object="running-upgrade-120000" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node running-upgrade-120000 event: Registered Node running-upgrade-120000 in Controller"
	I0321 00:19:55.255554       1 shared_informer.go:262] Caches are synced for resource quota
	I0321 00:19:55.255789       1 shared_informer.go:262] Caches are synced for persistent volume
	I0321 00:19:55.256535       1 range_allocator.go:374] Set node running-upgrade-120000 PodCIDR to [10.244.0.0/24]
	I0321 00:19:55.258937       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0321 00:19:55.266721       1 shared_informer.go:262] Caches are synced for TTL
	I0321 00:19:55.268417       1 shared_informer.go:262] Caches are synced for GC
	I0321 00:19:55.295027       1 shared_informer.go:262] Caches are synced for attach detach
	I0321 00:19:55.297275       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0321 00:19:55.297291       1 shared_informer.go:262] Caches are synced for resource quota
	I0321 00:19:55.711559       1 shared_informer.go:262] Caches are synced for garbage collector
	I0321 00:19:55.762332       1 shared_informer.go:262] Caches are synced for garbage collector
	I0321 00:19:55.762346       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0321 00:19:55.851100       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-6d4b75cb6d to 2"
	I0321 00:19:55.899751       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-26xdv"
	I0321 00:19:56.098861       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-55469"
	I0321 00:19:56.102243       1 event.go:294] "Event occurred" object="kube-system/coredns-6d4b75cb6d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-6d4b75cb6d-t2kcw"
	
	
	==> kube-proxy [86ac7ef755ac] <==
	I0321 00:19:57.020141       1 node.go:163] Successfully retrieved node IP: 10.0.2.15
	I0321 00:19:57.020166       1 server_others.go:138] "Detected node IP" address="10.0.2.15"
	I0321 00:19:57.020177       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0321 00:19:57.044810       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0321 00:19:57.044823       1 server_others.go:206] "Using iptables Proxier"
	I0321 00:19:57.044841       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0321 00:19:57.044945       1 server.go:661] "Version info" version="v1.24.1"
	I0321 00:19:57.044950       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0321 00:19:57.045232       1 config.go:317] "Starting service config controller"
	I0321 00:19:57.045238       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0321 00:19:57.045246       1 config.go:226] "Starting endpoint slice config controller"
	I0321 00:19:57.045247       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0321 00:19:57.046061       1 config.go:444] "Starting node config controller"
	I0321 00:19:57.046063       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0321 00:19:57.146610       1 shared_informer.go:262] Caches are synced for node config
	I0321 00:19:57.146637       1 shared_informer.go:262] Caches are synced for service config
	I0321 00:19:57.146651       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a89aa0467f21] <==
	W0321 00:19:40.166180       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0321 00:19:40.166188       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0321 00:19:40.166255       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0321 00:19:40.166263       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0321 00:19:40.166473       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0321 00:19:40.166481       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0321 00:19:40.166495       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0321 00:19:40.166505       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0321 00:19:40.166525       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0321 00:19:40.166533       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0321 00:19:40.166570       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0321 00:19:40.166583       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0321 00:19:40.166571       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0321 00:19:40.166624       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0321 00:19:41.036274       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0321 00:19:41.036663       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0321 00:19:41.116327       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0321 00:19:41.116488       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0321 00:19:41.144500       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0321 00:19:41.144519       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0321 00:19:41.154450       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0321 00:19:41.154583       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0321 00:19:41.158989       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0321 00:19:41.159001       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0321 00:19:43.764899       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Thu 2024-03-21 00:14:19 UTC, ends at Thu 2024-03-21 00:24:00 UTC. --
	Mar 21 00:19:43 running-upgrade-120000 kubelet[14220]: I0321 00:19:43.042694   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/56fbfb82921740999d043e5ca869ce42-k8s-certs\") pod \"kube-apiserver-running-upgrade-120000\" (UID: \"56fbfb82921740999d043e5ca869ce42\") " pod="kube-system/kube-apiserver-running-upgrade-120000"
	Mar 21 00:19:43 running-upgrade-120000 kubelet[14220]: I0321 00:19:43.726673   14220 apiserver.go:52] "Watching apiserver"
	Mar 21 00:19:43 running-upgrade-120000 kubelet[14220]: I0321 00:19:43.948155   14220 reconciler.go:157] "Reconciler: start to sync state"
	Mar 21 00:19:44 running-upgrade-120000 kubelet[14220]: E0321 00:19:44.330984   14220 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"etcd-running-upgrade-120000\" already exists" pod="kube-system/etcd-running-upgrade-120000"
	Mar 21 00:19:44 running-upgrade-120000 kubelet[14220]: E0321 00:19:44.532163   14220 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-scheduler-running-upgrade-120000\" already exists" pod="kube-system/kube-scheduler-running-upgrade-120000"
	Mar 21 00:19:44 running-upgrade-120000 kubelet[14220]: E0321 00:19:44.732598   14220 kubelet.go:1690] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-running-upgrade-120000\" already exists" pod="kube-system/kube-controller-manager-running-upgrade-120000"
	Mar 21 00:19:55 running-upgrade-120000 kubelet[14220]: I0321 00:19:55.260987   14220 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 00:19:55 running-upgrade-120000 kubelet[14220]: I0321 00:19:55.346844   14220 kuberuntime_manager.go:1095] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 21 00:19:55 running-upgrade-120000 kubelet[14220]: I0321 00:19:55.346939   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8976231c-9909-4be3-a2a1-90f43907ed6b-tmp\") pod \"storage-provisioner\" (UID: \"8976231c-9909-4be3-a2a1-90f43907ed6b\") " pod="kube-system/storage-provisioner"
	Mar 21 00:19:55 running-upgrade-120000 kubelet[14220]: I0321 00:19:55.347213   14220 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 21 00:19:55 running-upgrade-120000 kubelet[14220]: I0321 00:19:55.447835   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5q56\" (UniqueName: \"kubernetes.io/projected/8976231c-9909-4be3-a2a1-90f43907ed6b-kube-api-access-j5q56\") pod \"storage-provisioner\" (UID: \"8976231c-9909-4be3-a2a1-90f43907ed6b\") " pod="kube-system/storage-provisioner"
	Mar 21 00:19:55 running-upgrade-120000 kubelet[14220]: I0321 00:19:55.902982   14220 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.049983   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5e0a5439-2efc-485a-b803-74fa0624e388-xtables-lock\") pod \"kube-proxy-26xdv\" (UID: \"5e0a5439-2efc-485a-b803-74fa0624e388\") " pod="kube-system/kube-proxy-26xdv"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.050008   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5e0a5439-2efc-485a-b803-74fa0624e388-kube-proxy\") pod \"kube-proxy-26xdv\" (UID: \"5e0a5439-2efc-485a-b803-74fa0624e388\") " pod="kube-system/kube-proxy-26xdv"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.050029   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5e0a5439-2efc-485a-b803-74fa0624e388-lib-modules\") pod \"kube-proxy-26xdv\" (UID: \"5e0a5439-2efc-485a-b803-74fa0624e388\") " pod="kube-system/kube-proxy-26xdv"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.050041   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46nwg\" (UniqueName: \"kubernetes.io/projected/5e0a5439-2efc-485a-b803-74fa0624e388-kube-api-access-46nwg\") pod \"kube-proxy-26xdv\" (UID: \"5e0a5439-2efc-485a-b803-74fa0624e388\") " pod="kube-system/kube-proxy-26xdv"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.102285   14220 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.108125   14220 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.251090   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eed7a443-15f3-4f28-bcba-9e039422ac15-config-volume\") pod \"coredns-6d4b75cb6d-55469\" (UID: \"eed7a443-15f3-4f28-bcba-9e039422ac15\") " pod="kube-system/coredns-6d4b75cb6d-55469"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.251133   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b28a63b-d7ed-4c38-a604-71602d956d8a-config-volume\") pod \"coredns-6d4b75cb6d-t2kcw\" (UID: \"0b28a63b-d7ed-4c38-a604-71602d956d8a\") " pod="kube-system/coredns-6d4b75cb6d-t2kcw"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.251154   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tht\" (UniqueName: \"kubernetes.io/projected/0b28a63b-d7ed-4c38-a604-71602d956d8a-kube-api-access-84tht\") pod \"coredns-6d4b75cb6d-t2kcw\" (UID: \"0b28a63b-d7ed-4c38-a604-71602d956d8a\") " pod="kube-system/coredns-6d4b75cb6d-t2kcw"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.251168   14220 reconciler.go:270] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhg42\" (UniqueName: \"kubernetes.io/projected/eed7a443-15f3-4f28-bcba-9e039422ac15-kube-api-access-rhg42\") pod \"coredns-6d4b75cb6d-55469\" (UID: \"eed7a443-15f3-4f28-bcba-9e039422ac15\") " pod="kube-system/coredns-6d4b75cb6d-55469"
	Mar 21 00:19:56 running-upgrade-120000 kubelet[14220]: I0321 00:19:56.854413   14220 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="30a93e7f9a273e4980052012809ca957a2795d046ead6e82d2e862d8e8e5fadc"
	Mar 21 00:23:44 running-upgrade-120000 kubelet[14220]: I0321 00:23:44.086047   14220 scope.go:110] "RemoveContainer" containerID="a7ad224e3b144f92b0baacb3fc25b2f54bc99d4cabbbced0ccbba3c6c5ba6e98"
	Mar 21 00:23:45 running-upgrade-120000 kubelet[14220]: I0321 00:23:45.117477   14220 scope.go:110] "RemoveContainer" containerID="b9570a8a8ad9809446f10c10656736ab575355ade3ab1a98483d066ba3b0760a"
	
	
	==> storage-provisioner [59187ac0d704] <==
	I0321 00:19:55.747322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0321 00:19:55.750782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0321 00:19:55.750798       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0321 00:19:55.754474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0321 00:19:55.754712       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_running-upgrade-120000_6e6e5c75-c26f-456a-b7dc-7f9333c977c8!
	I0321 00:19:55.756353       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"26db9f95-0c9d-40f7-a53b-f4a6b3a88a90", APIVersion:"v1", ResourceVersion:"346", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' running-upgrade-120000_6e6e5c75-c26f-456a-b7dc-7f9333c977c8 became leader
	I0321 00:19:55.855806       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_running-upgrade-120000_6e6e5c75-c26f-456a-b7dc-7f9333c977c8!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-120000 -n running-upgrade-120000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.APIServer}} -p running-upgrade-120000 -n running-upgrade-120000: exit status 2 (15.577838042s)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "running-upgrade-120000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "running-upgrade-120000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p running-upgrade-120000
--- FAIL: TestRunningBinaryUpgrade (661.56s)
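
Note on the failure above: the kube-scheduler denials earlier in this log ("system:kube-scheduler" cannot list pods/namespaces/statefulsets/poddisruptionbudgets) are the usual transient noise while the control plane bootstraps its RBAC rules; the later "Caches are synced" line shows the scheduler recovered, and the test ultimately failed because the apiserver ended up Stopped. A minimal triage sketch against this profile, assuming the apiserver were reachable (it was not in this run, so these commands are illustrative only):

	# Re-check apiserver state, then verify the scheduler's effective RBAC:
	out/minikube-darwin-arm64 status -p running-upgrade-120000 --format={{.APIServer}}
	kubectl --context running-upgrade-120000 auth can-i list pods --as=system:kube-scheduler
	kubectl --context running-upgrade-120000 auth can-i watch statefulsets.apps --as=system:kube-scheduler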

TestKubernetesUpgrade (18.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-990000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-990000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (9.76082475s)

-- stdout --
	* [kubernetes-upgrade-990000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubernetes-upgrade-990000" primary control-plane node in "kubernetes-upgrade-990000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubernetes-upgrade-990000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:12:56.376964   14403 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:12:56.377099   14403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:12:56.377103   14403 out.go:304] Setting ErrFile to fd 2...
	I0320 17:12:56.377105   14403 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:12:56.377230   14403 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:12:56.378304   14403 out.go:298] Setting JSON to false
	I0320 17:12:56.394522   14403 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7946,"bootTime":1710972030,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:12:56.394580   14403 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:12:56.399098   14403 out.go:177] * [kubernetes-upgrade-990000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:12:56.413072   14403 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:12:56.417961   14403 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:12:56.413089   14403 notify.go:220] Checking for updates...
	I0320 17:12:56.427035   14403 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:12:56.429986   14403 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:12:56.433018   14403 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:12:56.436062   14403 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:12:56.439380   14403 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:12:56.439467   14403 config.go:182] Loaded profile config "offline-docker-761000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:12:56.439522   14403 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:12:56.443994   14403 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:12:56.450986   14403 start.go:297] selected driver: qemu2
	I0320 17:12:56.450994   14403 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:12:56.451000   14403 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:12:56.453440   14403 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:12:56.456030   14403 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:12:56.459076   14403 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:12:56.459115   14403 cni.go:84] Creating CNI manager for ""
	I0320 17:12:56.459125   14403 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0320 17:12:56.459150   14403 start.go:340] cluster config:
	{Name:kubernetes-upgrade-990000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:12:56.464052   14403 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:12:56.470970   14403 out.go:177] * Starting "kubernetes-upgrade-990000" primary control-plane node in "kubernetes-upgrade-990000" cluster
	I0320 17:12:56.474848   14403 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:12:56.474865   14403 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:12:56.474873   14403 cache.go:56] Caching tarball of preloaded images
	I0320 17:12:56.474935   14403 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:12:56.474941   14403 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0320 17:12:56.475005   14403 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kubernetes-upgrade-990000/config.json ...
	I0320 17:12:56.475017   14403 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kubernetes-upgrade-990000/config.json: {Name:mkcb7cc614cf73753abec11e9ead70e4fd4d462f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:12:56.475265   14403 start.go:360] acquireMachinesLock for kubernetes-upgrade-990000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:12:56.475303   14403 start.go:364] duration metric: took 29.792µs to acquireMachinesLock for "kubernetes-upgrade-990000"
	I0320 17:12:56.475318   14403 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:12:56.475354   14403 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:12:56.482893   14403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:12:56.501971   14403 start.go:159] libmachine.API.Create for "kubernetes-upgrade-990000" (driver="qemu2")
	I0320 17:12:56.502004   14403 client.go:168] LocalClient.Create starting
	I0320 17:12:56.502074   14403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:12:56.502108   14403 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:56.502119   14403 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:56.502181   14403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:12:56.502210   14403 main.go:141] libmachine: Decoding PEM data...
	I0320 17:12:56.502220   14403 main.go:141] libmachine: Parsing certificate...
	I0320 17:12:56.502694   14403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:12:56.644846   14403 main.go:141] libmachine: Creating SSH key...
	I0320 17:12:56.728966   14403 main.go:141] libmachine: Creating Disk image...
	I0320 17:12:56.728972   14403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:12:56.729160   14403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:12:56.742029   14403 main.go:141] libmachine: STDOUT: 
	I0320 17:12:56.742045   14403 main.go:141] libmachine: STDERR: 
	I0320 17:12:56.742097   14403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2 +20000M
	I0320 17:12:56.753113   14403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:12:56.753134   14403 main.go:141] libmachine: STDERR: 
	I0320 17:12:56.753151   14403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:12:56.753155   14403 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:12:56.753187   14403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ae:86:9b:bb:f4:39 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:12:56.754985   14403 main.go:141] libmachine: STDOUT: 
	I0320 17:12:56.755000   14403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:12:56.755033   14403 client.go:171] duration metric: took 253.025917ms to LocalClient.Create
	I0320 17:12:58.757201   14403 start.go:128] duration metric: took 2.281883583s to createHost
	I0320 17:12:58.757279   14403 start.go:83] releasing machines lock for "kubernetes-upgrade-990000", held for 2.282043375s
	W0320 17:12:58.757373   14403 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:58.768445   14403 out.go:177] * Deleting "kubernetes-upgrade-990000" in qemu2 ...
	W0320 17:12:58.796169   14403 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:12:58.796194   14403 start.go:728] Will try again in 5 seconds ...
	I0320 17:13:03.798042   14403 start.go:360] acquireMachinesLock for kubernetes-upgrade-990000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:13:03.798109   14403 start.go:364] duration metric: took 51µs to acquireMachinesLock for "kubernetes-upgrade-990000"
	I0320 17:13:03.798124   14403 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:13:03.798179   14403 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:13:03.806665   14403 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:13:03.822069   14403 start.go:159] libmachine.API.Create for "kubernetes-upgrade-990000" (driver="qemu2")
	I0320 17:13:03.822099   14403 client.go:168] LocalClient.Create starting
	I0320 17:13:03.822167   14403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:13:03.822193   14403 main.go:141] libmachine: Decoding PEM data...
	I0320 17:13:03.822204   14403 main.go:141] libmachine: Parsing certificate...
	I0320 17:13:03.822259   14403 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:13:03.822275   14403 main.go:141] libmachine: Decoding PEM data...
	I0320 17:13:03.822282   14403 main.go:141] libmachine: Parsing certificate...
	I0320 17:13:03.822622   14403 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:13:03.997048   14403 main.go:141] libmachine: Creating SSH key...
	I0320 17:13:04.034677   14403 main.go:141] libmachine: Creating Disk image...
	I0320 17:13:04.034683   14403 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:13:04.034858   14403 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:13:04.047197   14403 main.go:141] libmachine: STDOUT: 
	I0320 17:13:04.047215   14403 main.go:141] libmachine: STDERR: 
	I0320 17:13:04.047267   14403 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2 +20000M
	I0320 17:13:04.058287   14403 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:13:04.058303   14403 main.go:141] libmachine: STDERR: 
	I0320 17:13:04.058315   14403 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:13:04.058321   14403 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:13:04.058355   14403 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:3a:7e:d9:79:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:13:04.060135   14403 main.go:141] libmachine: STDOUT: 
	I0320 17:13:04.060151   14403 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:13:04.060164   14403 client.go:171] duration metric: took 238.070375ms to LocalClient.Create
	I0320 17:13:06.062290   14403 start.go:128] duration metric: took 2.264160125s to createHost
	I0320 17:13:06.062431   14403 start.go:83] releasing machines lock for "kubernetes-upgrade-990000", held for 2.264334667s
	W0320 17:13:06.062847   14403 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-990000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:13:06.073443   14403 out.go:177] 
	W0320 17:13:06.080606   14403 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:13:06.080650   14403 out.go:239] * 
	* 
	W0320 17:13:06.083034   14403 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:13:06.093477   14403 out.go:177] 

** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-arm64 start -p kubernetes-upgrade-990000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-arm64 stop -p kubernetes-upgrade-990000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-arm64 stop -p kubernetes-upgrade-990000: (3.000192416s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-arm64 -p kubernetes-upgrade-990000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p kubernetes-upgrade-990000 status --format={{.Host}}: exit status 7 (48.393875ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 start -p kubernetes-upgrade-990000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubernetes-upgrade-990000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (5.189127125s)

-- stdout --
	* [kubernetes-upgrade-990000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "kubernetes-upgrade-990000" primary control-plane node in "kubernetes-upgrade-990000" cluster
	* Restarting existing qemu2 VM for "kubernetes-upgrade-990000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "kubernetes-upgrade-990000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:13:09.190495   14451 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:13:09.190643   14451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:13:09.190648   14451 out.go:304] Setting ErrFile to fd 2...
	I0320 17:13:09.190650   14451 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:13:09.190795   14451 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:13:09.191819   14451 out.go:298] Setting JSON to false
	I0320 17:13:09.208628   14451 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7959,"bootTime":1710972030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:13:09.208733   14451 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:13:09.213547   14451 out.go:177] * [kubernetes-upgrade-990000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:13:09.217536   14451 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:13:09.221566   14451 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:13:09.217564   14451 notify.go:220] Checking for updates...
	I0320 17:13:09.227553   14451 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:13:09.230547   14451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:13:09.233484   14451 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:13:09.236515   14451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:13:09.239827   14451 config.go:182] Loaded profile config "kubernetes-upgrade-990000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0320 17:13:09.240122   14451 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:13:09.244450   14451 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:13:09.251557   14451 start.go:297] selected driver: qemu2
	I0320 17:13:09.251562   14451 start.go:901] validating driver "qemu2" against &{Name:kubernetes-upgrade-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:13:09.251612   14451 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:13:09.253900   14451 cni.go:84] Creating CNI manager for ""
	I0320 17:13:09.253916   14451 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:13:09.253941   14451 start.go:340] cluster config:
	{Name:kubernetes-upgrade-990000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:kubernetes-upgrade-990000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:13:09.258110   14451 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:13:09.266483   14451 out.go:177] * Starting "kubernetes-upgrade-990000" primary control-plane node in "kubernetes-upgrade-990000" cluster
	I0320 17:13:09.269400   14451 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:13:09.269413   14451 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0320 17:13:09.269420   14451 cache.go:56] Caching tarball of preloaded images
	I0320 17:13:09.269464   14451 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:13:09.269469   14451 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0320 17:13:09.269516   14451 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kubernetes-upgrade-990000/config.json ...
	I0320 17:13:09.269974   14451 start.go:360] acquireMachinesLock for kubernetes-upgrade-990000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:13:09.270001   14451 start.go:364] duration metric: took 21.542µs to acquireMachinesLock for "kubernetes-upgrade-990000"
	I0320 17:13:09.270009   14451 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:13:09.270014   14451 fix.go:54] fixHost starting: 
	I0320 17:13:09.270117   14451 fix.go:112] recreateIfNeeded on kubernetes-upgrade-990000: state=Stopped err=<nil>
	W0320 17:13:09.270126   14451 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:13:09.278383   14451 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-990000" ...
	I0320 17:13:09.282504   14451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:3a:7e:d9:79:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:13:09.284416   14451 main.go:141] libmachine: STDOUT: 
	I0320 17:13:09.284436   14451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:13:09.284464   14451 fix.go:56] duration metric: took 14.449666ms for fixHost
	I0320 17:13:09.284469   14451 start.go:83] releasing machines lock for "kubernetes-upgrade-990000", held for 14.465042ms
	W0320 17:13:09.284475   14451 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:13:09.284506   14451 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:13:09.284510   14451 start.go:728] Will try again in 5 seconds ...
	I0320 17:13:14.286240   14451 start.go:360] acquireMachinesLock for kubernetes-upgrade-990000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:13:14.286626   14451 start.go:364] duration metric: took 282.583µs to acquireMachinesLock for "kubernetes-upgrade-990000"
	I0320 17:13:14.286758   14451 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:13:14.286781   14451 fix.go:54] fixHost starting: 
	I0320 17:13:14.287533   14451 fix.go:112] recreateIfNeeded on kubernetes-upgrade-990000: state=Stopped err=<nil>
	W0320 17:13:14.287562   14451 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:13:14.297159   14451 out.go:177] * Restarting existing qemu2 VM for "kubernetes-upgrade-990000" ...
	I0320 17:13:14.301264   14451 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:3a:7e:d9:79:4a -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubernetes-upgrade-990000/disk.qcow2
	I0320 17:13:14.311456   14451 main.go:141] libmachine: STDOUT: 
	I0320 17:13:14.311533   14451 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:13:14.311595   14451 fix.go:56] duration metric: took 24.817459ms for fixHost
	I0320 17:13:14.311618   14451 start.go:83] releasing machines lock for "kubernetes-upgrade-990000", held for 24.968542ms
	W0320 17:13:14.311838   14451 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-990000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kubernetes-upgrade-990000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:13:14.317837   14451 out.go:177] 
	W0320 17:13:14.322180   14451 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:13:14.322216   14451 out.go:239] * 
	* 
	W0320 17:13:14.324747   14451 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:13:14.335212   14451 out.go:177] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-darwin-arm64 start -p kubernetes-upgrade-990000 --memory=2200 --kubernetes-version=v1.30.0-beta.0 --alsologtostderr -v=1 --driver=qemu2  : exit status 80
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-990000 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-990000 version --output=json: exit status 1 (64.326125ms)

** stderr ** 
	error: context "kubernetes-upgrade-990000" does not exist

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:626: *** TestKubernetesUpgrade FAILED at 2024-03-20 17:13:14.414717 -0700 PDT m=+780.640554668
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-990000 -n kubernetes-upgrade-990000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p kubernetes-upgrade-990000 -n kubernetes-upgrade-990000: exit status 7 (36.0255ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-990000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-990000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p kubernetes-upgrade-990000
--- FAIL: TestKubernetesUpgrade (18.20s)
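
Note on the failure above: all four start attempts in this test (two fresh creates, two restarts) died at the same host-side step. QEMU's netdev could never be attached because nothing was listening on /var/run/socket_vmnet ("Connection refused"), so this is a broken socket_vmnet daemon on the build agent rather than a Kubernetes upgrade problem. A minimal host-side check, assuming the daemon binary sits next to the socket_vmnet_client path logged above (the daemon path and gateway address below are assumptions, not taken from this log):

	ls -l /var/run/socket_vmnet                  # does the socket minikube expects exist?
	sudo launchctl list | grep -i socket_vmnet   # is a launchd daemon loaded for it?
	# Start it by hand for a quick test (vmnet generally requires root);
	# binary path and gateway are assumed, adjust to the agent's install:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet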

TestStoppedBinaryUpgrade/Upgrade (616.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2161533902 start -p stopped-upgrade-184000 --memory=2200 --vm-driver=qemu2 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2161533902 start -p stopped-upgrade-184000 --memory=2200 --vm-driver=qemu2 : (1m20.639856958s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2161533902 -p stopped-upgrade-184000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/minikube-v1.26.0.2161533902 -p stopped-upgrade-184000 stop: (12.131820875s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-arm64 start -p stopped-upgrade-184000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 
version_upgrade_test.go:198: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p stopped-upgrade-184000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80 (8m43.379937708s)

-- stdout --
	* [stopped-upgrade-184000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "stopped-upgrade-184000" primary control-plane node in "stopped-upgrade-184000" cluster
	* Restarting existing qemu2 VM for "stopped-upgrade-184000" ...
	* Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner
	
	

-- /stdout --
** stderr ** 
	I0320 17:14:41.746354   14525 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:14:41.746545   14525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:14:41.746549   14525 out.go:304] Setting ErrFile to fd 2...
	I0320 17:14:41.746551   14525 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:14:41.746722   14525 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:14:41.748233   14525 out.go:298] Setting JSON to false
	I0320 17:14:41.765942   14525 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8051,"bootTime":1710972030,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:14:41.766015   14525 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:14:41.770974   14525 out.go:177] * [stopped-upgrade-184000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:14:41.778591   14525 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:14:41.778591   14525 notify.go:220] Checking for updates...
	I0320 17:14:41.783097   14525 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:14:41.787153   14525 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:14:41.788586   14525 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:14:41.792172   14525 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:14:41.795144   14525 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:14:41.798495   14525 config.go:182] Loaded profile config "stopped-upgrade-184000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:14:41.802149   14525 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0320 17:14:41.805396   14525 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:14:41.810112   14525 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:14:41.817185   14525 start.go:297] selected driver: qemu2
	I0320 17:14:41.817202   14525 start.go:901] validating driver "qemu2" against &{Name:stopped-upgrade-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:14:41.817263   14525 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:14:41.821022   14525 cni.go:84] Creating CNI manager for ""
	I0320 17:14:41.821051   14525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:14:41.821090   14525 start.go:340] cluster config:
	{Name:stopped-upgrade-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:14:41.821140   14525 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:14:41.826181   14525 out.go:177] * Starting "stopped-upgrade-184000" primary control-plane node in "stopped-upgrade-184000" cluster
	I0320 17:14:41.834150   14525 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:14:41.834166   14525 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4
	I0320 17:14:41.834179   14525 cache.go:56] Caching tarball of preloaded images
	I0320 17:14:41.834231   14525 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:14:41.834236   14525 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on docker
	I0320 17:14:41.834288   14525 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/config.json ...
	I0320 17:14:41.834787   14525 start.go:360] acquireMachinesLock for stopped-upgrade-184000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:14:41.834819   14525 start.go:364] duration metric: took 26.292µs to acquireMachinesLock for "stopped-upgrade-184000"
	I0320 17:14:41.834839   14525 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:14:41.835008   14525 fix.go:54] fixHost starting: 
	I0320 17:14:41.835119   14525 fix.go:112] recreateIfNeeded on stopped-upgrade-184000: state=Stopped err=<nil>
	W0320 17:14:41.835127   14525 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:14:41.840768   14525 out.go:177] * Restarting existing qemu2 VM for "stopped-upgrade-184000" ...
	I0320 17:14:41.845366   14525 main.go:141] libmachine: executing: qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/Cellar/qemu/8.2.1/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/qemu.pid -nic user,model=virtio,hostfwd=tcp::52272-:22,hostfwd=tcp::52273-:2376,hostname=stopped-upgrade-184000 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/disk.qcow2
	I0320 17:14:41.890495   14525 main.go:141] libmachine: STDOUT: 
	I0320 17:14:41.890520   14525 main.go:141] libmachine: STDERR: 
	I0320 17:14:41.890527   14525 main.go:141] libmachine: Waiting for VM to start (ssh -p 52272 docker@127.0.0.1)...
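
The qemu-system-aarch64 invocation above forwards host port 52272 to the guest's SSH port 22 (hostfwd=tcp::52272-:22), so "Waiting for VM to start" amounts to polling that forwarded port until sshd answers. A rough sketch of such a wait loop, under the assumption that a plain TCP connect is an adequate liveness probe (waitForSSH is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls the forwarded SSH port until it accepts a TCP connection
// or the deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close() // port is accepting connections; the VM is up
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", addr)
}

func main() {
	if err := waitForSSH("127.0.0.1:52272", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

Here the wait succeeds after about 21 seconds (17:14:41 to 17:15:02 in the log).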
	I0320 17:15:02.700458   14525 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/config.json ...
	I0320 17:15:02.700906   14525 machine.go:94] provisionDockerMachine start ...
	I0320 17:15:02.700977   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:02.701256   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:02.701265   14525 main.go:141] libmachine: About to run SSH command:
	hostname
	I0320 17:15:02.784780   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0320 17:15:02.784832   14525 buildroot.go:166] provisioning hostname "stopped-upgrade-184000"
	I0320 17:15:02.784958   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:02.785143   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:02.785152   14525 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-184000 && echo "stopped-upgrade-184000" | sudo tee /etc/hostname
	I0320 17:15:02.859961   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-184000
	
	I0320 17:15:02.860051   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:02.860179   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:02.860187   14525 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0320 17:15:02.929789   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0320 17:15:02.929804   14525 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/17713-12248/.minikube CaCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/17713-12248/.minikube}
	I0320 17:15:02.929817   14525 buildroot.go:174] setting up certificates
	I0320 17:15:02.929821   14525 provision.go:84] configureAuth start
	I0320 17:15:02.929842   14525 provision.go:143] copyHostCerts
	I0320 17:15:02.929923   14525 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem, removing ...
	I0320 17:15:02.929928   14525 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem
	I0320 17:15:02.930025   14525 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/key.pem (1679 bytes)
	I0320 17:15:02.930952   14525 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem, removing ...
	I0320 17:15:02.930958   14525 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem
	I0320 17:15:02.931016   14525 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.pem (1082 bytes)
	I0320 17:15:02.931134   14525 exec_runner.go:144] found /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem, removing ...
	I0320 17:15:02.931138   14525 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem
	I0320 17:15:02.931177   14525 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/17713-12248/.minikube/cert.pem (1123 bytes)
	I0320 17:15:02.931278   14525 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-184000 san=[127.0.0.1 localhost minikube stopped-upgrade-184000]
	I0320 17:15:02.980068   14525 provision.go:177] copyRemoteCerts
	I0320 17:15:02.980140   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0320 17:15:02.980150   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:15:03.017385   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0320 17:15:03.024620   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0320 17:15:03.032135   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0320 17:15:03.039245   14525 provision.go:87] duration metric: took 109.418875ms to configureAuth
	I0320 17:15:03.039255   14525 buildroot.go:189] setting minikube options for container-runtime
	I0320 17:15:03.039360   14525 config.go:182] Loaded profile config "stopped-upgrade-184000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:15:03.039401   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.039489   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.039494   14525 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0320 17:15:03.108134   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0320 17:15:03.108145   14525 buildroot.go:70] root file system type: tmpfs
	I0320 17:15:03.108207   14525 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0320 17:15:03.108260   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.108371   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.108404   14525 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0320 17:15:03.178872   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=qemu2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0320 17:15:03.178925   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.179037   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.179046   14525 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0320 17:15:03.540367   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
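
The `diff -u old new || { mv ...; systemctl ... }` one-liner above makes the unit update idempotent: the rendered docker.service is installed, and the daemon reloaded and the service restarted, only when it differs from what is already on disk (here the old file did not exist, hence the diff error followed by the symlink creation). A local sketch of the same update-if-changed pattern, with replaceIfChanged as an illustrative stand-in rather than minikube's implementation:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// replaceIfChanged installs newPath over path only when their contents differ,
// returning true when a replacement (and hence a daemon-reload) is needed.
func replaceIfChanged(path, newPath string) (bool, error) {
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	oldData, err := os.ReadFile(path) // a missing file counts as "changed"
	if err == nil && bytes.Equal(oldData, newData) {
		return false, os.Remove(newPath) // identical: discard the .new file
	}
	return true, os.Rename(newPath, path)
}

func main() {
	changed, err := replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new")
	fmt.Println(changed, err)
}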
	
	I0320 17:15:03.540383   14525 machine.go:97] duration metric: took 839.501541ms to provisionDockerMachine
	I0320 17:15:03.540390   14525 start.go:293] postStartSetup for "stopped-upgrade-184000" (driver="qemu2")
	I0320 17:15:03.540397   14525 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0320 17:15:03.540484   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0320 17:15:03.540497   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:15:03.577640   14525 ssh_runner.go:195] Run: cat /etc/os-release
	I0320 17:15:03.578944   14525 info.go:137] Remote host: Buildroot 2021.02.12
	I0320 17:15:03.578951   14525 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/addons for local assets ...
	I0320 17:15:03.579015   14525 filesync.go:126] Scanning /Users/jenkins/minikube-integration/17713-12248/.minikube/files for local assets ...
	I0320 17:15:03.579107   14525 filesync.go:149] local asset: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem -> 126652.pem in /etc/ssl/certs
	I0320 17:15:03.579194   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0320 17:15:03.581852   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:03.588725   14525 start.go:296] duration metric: took 48.329208ms for postStartSetup
	I0320 17:15:03.588747   14525 fix.go:56] duration metric: took 21.75475225s for fixHost
	I0320 17:15:03.588797   14525 main.go:141] libmachine: Using SSH client type: native
	I0320 17:15:03.588910   14525 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x104631bf0] 0x104634450 <nil>  [] 0s} localhost 52272 <nil> <nil>}
	I0320 17:15:03.588916   14525 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0320 17:15:03.655671   14525 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710980103.725653129
	
	I0320 17:15:03.655682   14525 fix.go:216] guest clock: 1710980103.725653129
	I0320 17:15:03.655686   14525 fix.go:229] Guest: 2024-03-20 17:15:03.725653129 -0700 PDT Remote: 2024-03-20 17:15:03.58875 -0700 PDT m=+21.942645876 (delta=136.903129ms)
	I0320 17:15:03.655699   14525 fix.go:200] guest clock delta is within tolerance: 136.903129ms
	I0320 17:15:03.655701   14525 start.go:83] releasing machines lock for "stopped-upgrade-184000", held for 21.821728833s
	I0320 17:15:03.655787   14525 ssh_runner.go:195] Run: cat /version.json
	I0320 17:15:03.655796   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:15:03.655870   14525 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0320 17:15:03.656191   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	W0320 17:15:03.690822   14525 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0320 17:15:03.690899   14525 ssh_runner.go:195] Run: systemctl --version
	I0320 17:15:03.733070   14525 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0320 17:15:03.734841   14525 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0320 17:15:03.734868   14525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0320 17:15:03.737943   14525 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0320 17:15:03.743495   14525 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0320 17:15:03.743509   14525 start.go:494] detecting cgroup driver to use...
	I0320 17:15:03.744395   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:03.750746   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.7"|' /etc/containerd/config.toml"
	I0320 17:15:03.754142   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0320 17:15:03.757334   14525 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0320 17:15:03.757369   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0320 17:15:03.760743   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:03.763622   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0320 17:15:03.766508   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0320 17:15:03.769791   14525 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0320 17:15:03.773003   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0320 17:15:03.776020   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0320 17:15:03.779035   14525 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0320 17:15:03.782264   14525 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0320 17:15:03.785772   14525 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0320 17:15:03.789144   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:03.852015   14525 ssh_runner.go:195] Run: sudo systemctl restart containerd
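
The sed runs above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver (SystemdCgroup = false), keeping it consistent with the driver minikube configures for Docker and the kubelet below. The core substitution, expressed as a standalone sketch rather than minikube's own code:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample config fragment; the real file lives at /etc/containerd/config.toml.
	config := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true`
	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Println(re.ReplaceAllString(config, "${1}SystemdCgroup = false"))
}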
	I0320 17:15:03.859992   14525 start.go:494] detecting cgroup driver to use...
	I0320 17:15:03.860071   14525 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0320 17:15:03.865276   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:03.870568   14525 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0320 17:15:03.877353   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0320 17:15:03.882789   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0320 17:15:03.888863   14525 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0320 17:15:03.937602   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0320 17:15:03.942951   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0320 17:15:03.949082   14525 ssh_runner.go:195] Run: which cri-dockerd
	I0320 17:15:03.950304   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0320 17:15:03.953447   14525 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0320 17:15:03.959219   14525 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0320 17:15:04.026641   14525 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0320 17:15:04.091720   14525 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0320 17:15:04.091793   14525 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0320 17:15:04.097244   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:04.161434   14525 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:05.291480   14525 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.130067541s)
	I0320 17:15:05.291559   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0320 17:15:05.298851   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:05.303441   14525 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0320 17:15:05.369856   14525 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0320 17:15:05.434544   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:05.496683   14525 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0320 17:15:05.502602   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0320 17:15:05.507623   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:05.567205   14525 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0320 17:15:05.605341   14525 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0320 17:15:05.605416   14525 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0320 17:15:05.607606   14525 start.go:562] Will wait 60s for crictl version
	I0320 17:15:05.607662   14525 ssh_runner.go:195] Run: which crictl
	I0320 17:15:05.609003   14525 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0320 17:15:05.623643   14525 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.16
	RuntimeApiVersion:  1.41.0
	I0320 17:15:05.623702   14525 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:05.639906   14525 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0320 17:15:05.659635   14525 out.go:204] * Preparing Kubernetes v1.24.1 on Docker 20.10.16 ...
	I0320 17:15:05.659758   14525 ssh_runner.go:195] Run: grep 10.0.2.2	host.minikube.internal$ /etc/hosts
	I0320 17:15:05.661167   14525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "10.0.2.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
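
The bash one-liner above upserts a hosts entry: it filters out any existing line for host.minikube.internal with grep -v, appends a fresh tab-separated `10.0.2.2 host.minikube.internal` mapping, and copies the result back over /etc/hosts. The same logic as an illustrative sketch (upsertHostsEntry is a hypothetical helper, and the real command runs inside the guest over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any line ending in "\t<name>" (the grep -v step)
// and appends a fresh "<ip>\t<name>" entry.
func upsertHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(upsertHostsEntry("/etc/hosts", "10.0.2.2", "host.minikube.internal"))
}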
	I0320 17:15:05.665471   14525 kubeadm.go:877] updating cluster {Name:stopped-upgrade-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s} ...
	I0320 17:15:05.665515   14525 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime docker
	I0320 17:15:05.665569   14525 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:05.676230   14525 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:05.676238   14525 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:05.676280   14525 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:05.679153   14525 ssh_runner.go:195] Run: which lz4
	I0320 17:15:05.680409   14525 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0320 17:15:05.681686   14525 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0320 17:15:05.681698   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-docker-overlay2-arm64.tar.lz4 --> /preloaded.tar.lz4 (359514331 bytes)
	I0320 17:15:06.423639   14525 docker.go:649] duration metric: took 743.286625ms to copy over tarball
	I0320 17:15:06.423698   14525 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0320 17:15:07.581148   14525 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.157475s)
	I0320 17:15:07.581163   14525 ssh_runner.go:146] rm: /preloaded.tar.lz4
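
The preload flow above is copy-extract-delete: the ~360 MB lz4-compressed image tarball is scp'd into the VM, unpacked over /var (preserving xattrs so file capabilities survive), then removed. A standalone sketch of the extract-and-clean-up step, assuming it runs inside the guest with sudo and lz4 available:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Mirrors the logged command: sudo tar --xattrs --xattrs-include
	// security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	extract := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	extract.Stdout, extract.Stderr = os.Stdout, os.Stderr
	if err := extract.Run(); err != nil {
		log.Fatal(err)
	}
	// The tarball is only a transfer vehicle; remove it once extracted.
	if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
		log.Fatal(err)
	}
}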
	I0320 17:15:07.597482   14525 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0320 17:15:07.601585   14525 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2523 bytes)
	I0320 17:15:07.606734   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:07.669991   14525 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0320 17:15:09.394510   14525 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.724569375s)
	I0320 17:15:09.394811   14525 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0320 17:15:09.406544   14525 docker.go:685] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-apiserver:v1.24.1
	k8s.gcr.io/kube-proxy:v1.24.1
	k8s.gcr.io/kube-controller-manager:v1.24.1
	k8s.gcr.io/kube-scheduler:v1.24.1
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/coredns/coredns:v1.8.6
	<none>:<none>
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0320 17:15:09.406556   14525 docker.go:691] registry.k8s.io/kube-apiserver:v1.24.1 wasn't preloaded
	I0320 17:15:09.406561   14525 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.1 registry.k8s.io/kube-controller-manager:v1.24.1 registry.k8s.io/kube-scheduler:v1.24.1 registry.k8s.io/kube-proxy:v1.24.1 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0320 17:15:09.415135   14525 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0320 17:15:09.415146   14525 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:09.415221   14525 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:09.415223   14525 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:09.415274   14525 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:09.415275   14525 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:09.415315   14525 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:09.415135   14525 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:09.421789   14525 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:09.421871   14525 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:09.421975   14525 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:09.421988   14525 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:09.422118   14525 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:09.422213   14525 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0320 17:15:09.422204   14525 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:09.422293   14525 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	W0320 17:15:11.391245   14525 image.go:265] image registry.k8s.io/coredns/coredns:v1.8.6 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:11.391530   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:11.412708   14525 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "6af7f860a8197bfa3fdb7dec2061aa33870253e87a1e91c492d55b8a4fd38d14" in container runtime
	I0320 17:15:11.413810   14525 docker.go:337] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:11.413885   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0320 17:15:11.427642   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6
	I0320 17:15:11.427762   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:11.429577   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.8.6: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.8.6': No such file or directory
	I0320 17:15:11.429594   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 --> /var/lib/minikube/images/coredns_v1.8.6 (12318720 bytes)
	I0320 17:15:11.465256   14525 docker.go:304] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0320 17:15:11.465270   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.8.6 | docker load"
	I0320 17:15:11.488168   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0320 17:15:11.507936   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0320 17:15:11.507985   14525 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "e5a475a0380575fb5df454b2e32bdec93e1ec0094d8a61e895b41567cb884550" in container runtime
	I0320 17:15:11.508000   14525 docker.go:337] Removing image: registry.k8s.io/pause:3.7
	I0320 17:15:11.508046   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.7
	I0320 17:15:11.517907   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7
	I0320 17:15:11.518030   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0320 17:15:11.519665   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.7: stat -c "%s %y" /var/lib/minikube/images/pause_3.7: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.7': No such file or directory
	I0320 17:15:11.519681   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 --> /var/lib/minikube/images/pause_3.7 (268288 bytes)
	I0320 17:15:11.522170   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:11.531923   14525 docker.go:304] Loading image: /var/lib/minikube/images/pause_3.7
	I0320 17:15:11.531935   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.7 | docker load"
	I0320 17:15:11.534761   14525 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.1" does not exist at hash "7c5896a75862a8bc252122185a929cec1393db2c525f6440137d4fbf46bbf6f9" in container runtime
	I0320 17:15:11.534780   14525 docker.go:337] Removing image: registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:11.534828   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.24.1
	I0320 17:15:11.549511   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:11.562573   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:11.569674   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:11.571138   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:11.590336   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1
	I0320 17:15:11.590360   14525 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.1" needs transfer: "registry.k8s.io/kube-proxy:v1.24.1" does not exist at hash "fcbd620bbac080b658602c597709b377cb2b3fec134a097a27f94cba9b2ed2fa" in container runtime
	I0320 17:15:11.590375   14525 docker.go:337] Removing image: registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:11.590418   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.24.1
	I0320 17:15:11.590426   14525 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.1" does not exist at hash "000c19baf6bba51ff7ae5449f4c7a16d9190cef0263f58070dbf62cea9c4982f" in container runtime
	I0320 17:15:11.590436   14525 docker.go:337] Removing image: registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:11.590458   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.24.1
	I0320 17:15:11.590461   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.7 from cache
	I0320 17:15:11.601150   14525 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.1" does not exist at hash "f61bbe9259d7caa580deb6c8e4bfd1780c7b5887efe3aa3adc7cc74f68a27c1b" in container runtime
	I0320 17:15:11.601177   14525 docker.go:337] Removing image: registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:11.601159   14525 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "a9a710bb96df080e6b9c720eb85dc5b832ff84abf77263548d74fedec6466a5a" in container runtime
	I0320 17:15:11.601231   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.24.1
	I0320 17:15:11.601259   14525 docker.go:337] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:11.601275   14525 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.5.3-0
	I0320 17:15:11.610928   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.24.1
	I0320 17:15:11.610945   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.24.1
	I0320 17:15:11.626855   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.24.1
	I0320 17:15:11.626938   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0
	I0320 17:15:11.627028   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:11.628395   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.5.3-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.5.3-0': No such file or directory
	I0320 17:15:11.628408   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 --> /var/lib/minikube/images/etcd_3.5.3-0 (81117184 bytes)
	I0320 17:15:11.790264   14525 docker.go:304] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0320 17:15:11.790285   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.5.3-0 | docker load"
	I0320 17:15:11.929174   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.3-0 from cache
	W0320 17:15:12.068787   14525 image.go:265] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I0320 17:15:12.069022   14525 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:12.089261   14525 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I0320 17:15:12.089290   14525 docker.go:337] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:12.089356   14525 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:15:12.108285   14525 cache_images.go:286] Loading image from: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0320 17:15:12.108427   14525 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0320 17:15:12.110064   14525 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I0320 17:15:12.110080   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I0320 17:15:12.136641   14525 docker.go:304] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0320 17:15:12.136655   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I0320 17:15:12.374264   14525 cache_images.go:315] Transferred and loaded /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0320 17:15:12.374301   14525 cache_images.go:92] duration metric: took 2.967848875s to LoadCachedImages
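
Each "Loading image" step above pipes a cached image tarball into the Docker daemon with `sudo cat <file> | docker load`; LoadCachedImages then fails overall only because the kube-apiserver image was missing from the host-side cache (see the X line below). A minimal sketch of that load step (dockerLoad is illustrative, not the cache_images implementation, and assumes read access to the image file):

package main

import (
	"log"
	"os"
	"os/exec"
)

// dockerLoad streams a saved image tarball into the Docker daemon,
// the Go equivalent of `cat <path> | docker load`.
func dockerLoad(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	cmd := exec.Command("docker", "load")
	cmd.Stdin = f
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := dockerLoad("/var/lib/minikube/images/coredns_v1.8.6"); err != nil {
		log.Fatal(err)
	}
}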
	W0320 17:15:12.374664   14525 out.go:239] X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	X Unable to load cached images: loading cached images: stat /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.24.1: no such file or directory
	I0320 17:15:12.374672   14525 kubeadm.go:928] updating node { 10.0.2.15 8443 v1.24.1 docker true true} ...
	I0320 17:15:12.374722   14525 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=stopped-upgrade-184000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=10.0.2.15
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0320 17:15:12.374781   14525 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0320 17:15:12.388882   14525 cni.go:84] Creating CNI manager for ""
	I0320 17:15:12.388894   14525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:15:12.388899   14525 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0320 17:15:12.388907   14525 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:10.0.2.15 APIServerPort:8443 KubernetesVersion:v1.24.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-184000 NodeName:stopped-upgrade-184000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "10.0.2.15"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:10.0.2.15 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0320 17:15:12.388979   14525 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 10.0.2.15
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "stopped-upgrade-184000"
	  kubeletExtraArgs:
	    node-ip: 10.0.2.15
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "10.0.2.15"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
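The generated config above is four kubeadm documents in one file separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch of splitting such a bundle and listing the document kinds, e.g. as a sanity check before handing the file to kubeadm; the kindsOf helper is invented here and is not how minikube processes the file.

package main

import (
	"fmt"
	"strings"
)

// kindsOf splits a multi-document YAML bundle on "---" separators and
// returns the value of each document's top-level "kind:" line.
func kindsOf(bundle string) []string {
	var kinds []string
	for _, doc := range strings.Split(bundle, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kinds = append(kinds, strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
				break
			}
		}
	}
	return kinds
}

func main() {
	bundle := "apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta3\nkind: ClusterConfiguration\n"
	fmt.Println(kindsOf(bundle)) // [InitConfiguration ClusterConfiguration]
}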
	I0320 17:15:12.389036   14525 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.1
	I0320 17:15:12.392107   14525 binaries.go:44] Found k8s binaries, skipping transfer
	I0320 17:15:12.392142   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0320 17:15:12.394532   14525 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0320 17:15:12.399308   14525 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0320 17:15:12.404048   14525 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2096 bytes)
	I0320 17:15:12.409453   14525 ssh_runner.go:195] Run: grep 10.0.2.15	control-plane.minikube.internal$ /etc/hosts
	I0320 17:15:12.410733   14525 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "10.0.2.15	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
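The one-liner above is the standard idempotent /etc/hosts edit: grep -v strips any existing control-plane.minikube.internal line, the fresh mapping is appended, and the file is replaced through a temp copy so re-running never duplicates the entry. The same filter expressed in Go, as a sketch operating on file contents as a string rather than via sudo; upsertHost is an invented name.

package main

import (
	"fmt"
	"strings"
)

// upsertHost removes any line already mapping host and appends a fresh
// "ip<TAB>host" entry, mirroring the grep -v / echo pipeline in the log.
func upsertHost(hostsFile, ip, host string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	in := "127.0.0.1\tlocalhost\n10.0.2.2\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(in, "10.0.2.15", "control-plane.minikube.internal"))
}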
	I0320 17:15:12.414261   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:15:12.478803   14525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:15:12.485462   14525 certs.go:68] Setting up /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000 for IP: 10.0.2.15
	I0320 17:15:12.485472   14525 certs.go:194] generating shared ca certs ...
	I0320 17:15:12.485481   14525 certs.go:226] acquiring lock for ca certs: {Name:mkd2fbb5d6b83ab66860b229253f0a2f91e28e51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.485949   14525 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key
	I0320 17:15:12.486105   14525 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key
	I0320 17:15:12.486111   14525 certs.go:256] generating profile certs ...
	I0320 17:15:12.486317   14525 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key
	I0320 17:15:12.486334   14525 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259
	I0320 17:15:12.486346   14525 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 10.0.2.15]
	I0320 17:15:12.576715   14525 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259 ...
	I0320 17:15:12.576726   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259: {Name:mk0468bba07dc9aca965836ab6af72acc62992ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.577015   14525 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259 ...
	I0320 17:15:12.577024   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259: {Name:mkd8a41300bff8629eed00d001fa8df356524ad9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.577148   14525 certs.go:381] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt.9625c259 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt
	I0320 17:15:12.577283   14525 certs.go:385] copying /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key.9625c259 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key
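The apiserver certificate above is generated with four IP SANs: 10.96.0.1 (the first address of the 10.96.0.0/12 service CIDR, i.e. the in-cluster kubernetes service), 127.0.0.1, 10.0.0.1, and the node IP 10.0.2.15. A self-signed sketch of the same idea with crypto/x509; minikube actually signs with its minikubeCA rather than self-signing, so this is illustration only.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // 3 years, cf. CertExpiration:26280h0m0s below
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs the logged cert is generated with:
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("10.0.2.15"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}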
	I0320 17:15:12.577612   14525 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/proxy-client.key
	I0320 17:15:12.577758   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem (1338 bytes)
	W0320 17:15:12.577928   14525 certs.go:480] ignoring /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665_empty.pem, impossibly tiny 0 bytes
	I0320 17:15:12.577934   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca-key.pem (1675 bytes)
	I0320 17:15:12.577954   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem (1082 bytes)
	I0320 17:15:12.577972   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem (1123 bytes)
	I0320 17:15:12.577989   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/key.pem (1679 bytes)
	I0320 17:15:12.578026   14525 certs.go:484] found cert: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem (1708 bytes)
	I0320 17:15:12.578646   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0320 17:15:12.585420   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0320 17:15:12.591924   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0320 17:15:12.598120   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0320 17:15:12.604737   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0320 17:15:12.610979   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0320 17:15:12.617920   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0320 17:15:12.625299   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0320 17:15:12.632627   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0320 17:15:12.638963   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/12665.pem --> /usr/share/ca-certificates/12665.pem (1338 bytes)
	I0320 17:15:12.645708   14525 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/ssl/certs/126652.pem --> /usr/share/ca-certificates/126652.pem (1708 bytes)
	I0320 17:15:12.652728   14525 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0320 17:15:12.657873   14525 ssh_runner.go:195] Run: openssl version
	I0320 17:15:12.659725   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0320 17:15:12.662571   14525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:12.664134   14525 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 21 00:14 /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:12.664156   14525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0320 17:15:12.665943   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0320 17:15:12.669319   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12665.pem && ln -fs /usr/share/ca-certificates/12665.pem /etc/ssl/certs/12665.pem"
	I0320 17:15:12.672500   14525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12665.pem
	I0320 17:15:12.674022   14525 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 21 00:02 /usr/share/ca-certificates/12665.pem
	I0320 17:15:12.674052   14525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12665.pem
	I0320 17:15:12.675954   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12665.pem /etc/ssl/certs/51391683.0"
	I0320 17:15:12.678811   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/126652.pem && ln -fs /usr/share/ca-certificates/126652.pem /etc/ssl/certs/126652.pem"
	I0320 17:15:12.682099   14525 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/126652.pem
	I0320 17:15:12.683737   14525 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 21 00:02 /usr/share/ca-certificates/126652.pem
	I0320 17:15:12.683759   14525 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/126652.pem
	I0320 17:15:12.685511   14525 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/126652.pem /etc/ssl/certs/3ec20f2e.0"
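Each certificate installed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above) so that OpenSSL's lookup-by-hash can find it. A sketch of the same two steps, shelling out to openssl exactly as the remote commands do; it assumes an openssl binary on PATH and write access to the certs directory, and hashLink is an invented name.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink runs `openssl x509 -hash -noout -in cert` and creates
// <certsDir>/<hash>.0 -> cert, as the logged ln -fs commands do.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // -f semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}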
	I0320 17:15:12.688817   14525 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0320 17:15:12.690255   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0320 17:15:12.692567   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0320 17:15:12.694417   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0320 17:15:12.696613   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0320 17:15:12.698422   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0320 17:15:12.700266   14525 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
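The six `openssl x509 -checkend 86400` runs above each ask one question: does the certificate remain valid for at least the next 24 hours? The equivalent check in pure Go, as a sketch; checkEnd is an invented name.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// checkEnd mirrors `openssl x509 -checkend <seconds>`: it fails if the
// certificate expires within the given window.
func checkEnd(pemPath string, window time.Duration) error {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	if time.Now().Add(window).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires at %s, within %s", cert.NotAfter, window)
	}
	return nil
}

func main() {
	err := checkEnd("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400*time.Second)
	fmt.Println(err) // <nil> means the cert outlives the window
}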
	I0320 17:15:12.702255   14525 kubeadm.go:391] StartCluster: {Name:stopped-upgrade-184000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:52304 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0320 17:15:12.702322   14525 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:12.714522   14525 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0320 17:15:12.717814   14525 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0320 17:15:12.717820   14525 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0320 17:15:12.717823   14525 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0320 17:15:12.717840   14525 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0320 17:15:12.721163   14525 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0320 17:15:12.721196   14525 kubeconfig.go:47] verify endpoint returned: get endpoint: "stopped-upgrade-184000" does not appear in /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:15:12.721210   14525 kubeconfig.go:62] /Users/jenkins/minikube-integration/17713-12248/kubeconfig needs updating (will repair): [kubeconfig missing "stopped-upgrade-184000" cluster setting kubeconfig missing "stopped-upgrade-184000" context setting]
	I0320 17:15:12.721932   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:15:12.723769   14525 kapi.go:59] client config for stopped-upgrade-184000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105922a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0320 17:15:12.728402   14525 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0320 17:15:12.731114   14525 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -11,7 +11,7 @@
	       - signing
	       - authentication
	 nodeRegistration:
	-  criSocket: /var/run/cri-dockerd.sock
	+  criSocket: unix:///var/run/cri-dockerd.sock
	   name: "stopped-upgrade-184000"
	   kubeletExtraArgs:
	     node-ip: 10.0.2.15
	@@ -49,7 +49,9 @@
	 authentication:
	   x509:
	     clientCAFile: /var/lib/minikube/certs/ca.crt
	-cgroupDriver: systemd
	+cgroupDriver: cgroupfs
	+hairpinMode: hairpin-veth
	+runtimeRequestTimeout: 15m
	 clusterDomain: "cluster.local"
	 # disable disk resource management by default
	 imageGCHighThresholdPercent: 100
	
	-- /stdout --
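Drift detection here is simply `diff -u old new` on the rendered kubeadm.yaml: a non-empty diff (above, the CRI socket gaining its unix:// scheme and cgroupDriver flipping from systemd to cgroupfs) triggers a full reconfigure rather than a plain restart. A minimal sketch of that decision, comparing the two files byte-for-byte; configDrifted is an invented helper.

package main

import (
	"bytes"
	"fmt"
	"os"
)

// configDrifted reports whether the freshly rendered config differs
// from the one already on disk; a missing old file also counts as drift.
func configDrifted(oldPath, newPath string) (bool, error) {
	oldData, err := os.ReadFile(oldPath)
	if os.IsNotExist(err) {
		return true, nil
	} else if err != nil {
		return false, err
	}
	newData, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(oldData, newData), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drifted, err)
}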
	I0320 17:15:12.731119   14525 kubeadm.go:1154] stopping kube-system containers ...
	I0320 17:15:12.731155   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0320 17:15:12.741809   14525 docker.go:483] Stopping containers: [fd0f420578d3 00d63bb5d94b cd1b8729cc02 cd64e17cdade b0cb64b9aeb4 24c5f7227f44 0ad0b2b977a2 3017c60fa3bc]
	I0320 17:15:12.741875   14525 ssh_runner.go:195] Run: docker stop fd0f420578d3 00d63bb5d94b cd1b8729cc02 cd64e17cdade b0cb64b9aeb4 24c5f7227f44 0ad0b2b977a2 3017c60fa3bc
	I0320 17:15:12.752551   14525 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0320 17:15:12.757768   14525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:15:12.761099   14525 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0320 17:15:12.761105   14525 kubeadm.go:156] found existing configuration files:
	
	I0320 17:15:12.761126   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf
	I0320 17:15:12.763769   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0320 17:15:12.763788   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:15:12.766232   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf
	I0320 17:15:12.769265   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0320 17:15:12.769292   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:15:12.772061   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf
	I0320 17:15:12.774471   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0320 17:15:12.774492   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:15:12.777568   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf
	I0320 17:15:12.780649   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0320 17:15:12.780669   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0320 17:15:12.783314   14525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:15:12.786233   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:12.807804   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.244402   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.359009   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0320 17:15:13.384252   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
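The restart replays five kubeadm init phases in order (certs, kubeconfig, kubelet-start, control-plane, etcd local), all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of driving those phases from Go; the log resolves kubeadm by prepending the minikube binaries directory to PATH, which calling the binary by absolute path reproduces here.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.24.1/kubeadm"
	config := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", config)
		// Each phase must succeed before the next one runs.
		if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", phase, err, out)
			return
		}
	}
}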
	I0320 17:15:13.407667   14525 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:15:13.407747   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:13.909769   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:14.409787   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:15:14.414947   14525 api_server.go:72] duration metric: took 1.007320084s to wait for apiserver process to appear ...
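Waiting for the apiserver process is a plain poll: run pgrep roughly every half second (note the 13.407 / 13.909 / 14.409 timestamps above) until it exits zero. A generic sketch of that poller; waitForProcess and its parameters are invented for this example.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the deadline passes.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("pgrep", "-xnf", pattern).Run() == nil {
			return nil // pgrep exits 0 once a process matches
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, time.Minute))
}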
	I0320 17:15:14.414959   14525 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:15:14.415708   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:19.417306   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:19.417358   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:24.417649   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:24.417714   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:29.418048   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:29.418074   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:34.418512   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:34.418536   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:39.419095   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:39.419131   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:44.420392   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:44.420414   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:49.421590   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:49.421638   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:54.422868   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:54.422970   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:15:59.425244   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:15:59.425294   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:04.425961   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:04.426016   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:09.428141   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:09.428205   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:14.430459   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
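Every healthz probe above gives up after exactly five seconds (the Client.Timeout in the error) and the loop retries for about a minute before collecting diagnostics; here every attempt fails, which is what this failing test is stuck on. A sketch of a single probe; TLS verification is skipped purely to keep the example self-contained, whereas minikube verifies against the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one probe like api_server.go does: GET /healthz
// with a hard 5s client timeout, expecting the literal body "ok".
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustration only: minikube trusts its cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "context deadline exceeded" as in the log
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkHealthz("https://10.0.2.15:8443/healthz"))
}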
	I0320 17:16:14.431977   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:14.467800   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:14.467936   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:14.488660   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:14.488766   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:14.503520   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:14.503605   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:14.516539   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:14.516617   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:14.527982   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:14.528051   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:14.538283   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:14.538353   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:14.548968   14525 logs.go:276] 0 containers: []
	W0320 17:16:14.548981   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:14.549035   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:14.559932   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:14.559949   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:14.559955   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:14.574212   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:14.574220   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:14.585846   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:14.585856   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:14.598121   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:14.598135   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:14.609898   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:14.609908   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:14.646983   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:14.646992   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:14.650939   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:14.650945   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:14.761586   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:14.761599   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:14.772602   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:14.772613   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:14.790682   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:14.790692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:14.817542   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:14.817551   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:14.837886   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:14.837897   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:14.849811   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:14.849822   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:14.864500   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:14.864515   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:14.878991   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:14.879000   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:14.896369   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:14.896380   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:14.907630   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:14.907654   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
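When healthz keeps failing, the loop gathers diagnostics: for each control-plane component it lists matching containers via a name filter (k8s_<component>, the kubelet's Docker-shim naming convention), tails 400 lines of each, and adds the kubelet and docker journals, dmesg, and describe nodes. The near-identical sweeps that follow below are this retry loop going around again. A condensed sketch of the discovery step; containersFor is an invented helper and output handling is elided.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containersFor lists container IDs whose names match k8s_<component>,
// the same filter the logged `docker ps -a` invocations use.
func containersFor(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containersFor(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%d containers for %s: %v\n", len(ids), c, ids)
		for _, id := range ids {
			// run `docker logs --tail 400` per container, as logs.go does
			exec.Command("docker", "logs", "--tail", "400", id).Run()
		}
	}
}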
	I0320 17:16:17.432813   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:22.434907   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:22.435273   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:22.468226   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:22.468352   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:22.488017   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:22.488109   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:22.502614   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:22.502687   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:22.514724   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:22.514805   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:22.525956   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:22.526016   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:22.536823   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:22.536881   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:22.547224   14525 logs.go:276] 0 containers: []
	W0320 17:16:22.547235   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:22.547294   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:22.558272   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:22.558289   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:22.558306   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:22.582643   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:22.582653   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:22.594256   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:22.594266   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:22.608518   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:22.608531   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:22.622830   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:22.622845   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:22.634728   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:22.634739   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:22.649988   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:22.650002   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:22.661858   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:22.661873   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:22.673702   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:22.673712   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:22.678276   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:22.678281   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:22.714661   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:22.714671   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:22.728552   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:22.728560   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:22.739594   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:22.739603   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:22.757389   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:22.757405   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:22.768661   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:22.768669   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:22.807046   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:22.807055   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:22.819716   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:22.819729   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:25.350753   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:30.351186   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:30.351408   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:30.368362   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:30.368444   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:30.382056   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:30.382129   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:30.398889   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:30.398950   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:30.409816   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:30.409890   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:30.420639   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:30.420704   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:30.440035   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:30.440097   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:30.453838   14525 logs.go:276] 0 containers: []
	W0320 17:16:30.453856   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:30.453911   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:30.464618   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:30.464644   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:30.464652   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:30.501115   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:30.501125   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:30.515508   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:30.515518   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:30.529323   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:30.529331   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:30.541145   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:30.541157   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:30.580199   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:30.580210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:30.591771   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:30.591780   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:30.607225   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:30.607235   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:30.627931   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:30.627942   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:30.632009   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:30.632016   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:30.656482   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:30.656494   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:30.668297   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:30.668310   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:30.683906   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:30.683918   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:30.702076   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:30.702086   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:30.714241   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:30.714250   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:30.726353   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:30.726363   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:30.750386   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:30.750396   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:33.264359   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:38.266471   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:38.266737   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:38.286672   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:38.286770   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:38.303272   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:38.303356   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:38.314833   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:38.314906   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:38.325480   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:38.325548   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:38.335719   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:38.335793   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:38.346228   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:38.346315   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:38.356504   14525 logs.go:276] 0 containers: []
	W0320 17:16:38.356519   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:38.356578   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:38.367139   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:38.367161   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:38.367166   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:38.390398   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:38.390406   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:38.401779   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:38.401794   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:38.429597   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:38.429610   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:38.443241   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:38.443250   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:38.457596   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:38.457609   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:38.474897   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:38.474907   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:38.486425   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:38.486437   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:38.524417   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:38.524432   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:38.529265   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:38.529274   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:38.540252   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:38.540265   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:38.555569   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:38.555579   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:38.567545   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:38.567556   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:38.578969   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:38.578979   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:38.618288   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:38.618299   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:38.630701   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:38.630712   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:38.641747   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:38.641758   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:41.157179   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:46.159331   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:46.159589   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:46.181934   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:46.182035   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:46.197968   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:46.198047   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:46.210549   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:46.210630   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:46.221416   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:46.221492   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:46.232344   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:46.232417   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:46.242734   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:46.242800   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:46.252685   14525 logs.go:276] 0 containers: []
	W0320 17:16:46.252695   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:46.252763   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:46.263272   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:16:46.263289   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:46.263297   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:46.301149   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:46.301160   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:46.315181   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:46.315190   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:46.336862   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:46.336871   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:46.351917   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:46.351927   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:46.375656   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:46.375663   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:46.387858   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:46.387869   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:46.409023   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:46.409035   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:46.413563   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:46.413570   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:46.438417   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:46.438429   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:46.452583   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:46.452596   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:16:46.471334   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:46.471347   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:46.482867   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:46.482878   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:46.495181   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:46.495193   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:46.506517   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:46.506529   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:46.519456   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:46.519466   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:46.558106   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:46.558122   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:49.075680   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:16:54.077977   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:16:54.078487   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:16:54.115368   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:16:54.115504   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:16:54.136364   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:16:54.136458   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:16:54.151148   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:16:54.151222   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:16:54.167452   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:16:54.167533   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:16:54.178590   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:16:54.178657   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:16:54.188976   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:16:54.189047   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:16:54.199232   14525 logs.go:276] 0 containers: []
	W0320 17:16:54.199242   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:16:54.199309   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:16:54.209887   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
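Each diagnostic pass begins with the enumeration block above: one `docker ps -a` per control-plane component, filtered on the kubeadm `k8s_<component>` container-name prefix. Components that have restarted report two IDs (old and new container), and kindnet legitimately reports zero on this cluster. A rough local equivalent of that enumeration — in the log it runs over SSH via ssh_runner; here it shells out directly, and the component list is copied from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIDs mirrors the `docker ps -a --filter=name=k8s_<name>` calls:
    // list every container, running or exited, whose name matches the
    // kubeadm naming convention for the given component.
    func containerIDs(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "storage-provisioner"} {
    		ids, err := containerIDs(c)
    		if err != nil {
    			fmt.Println(c, "error:", err)
    			continue
    		}
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }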
	I0320 17:16:54.209918   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:16:54.209924   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:16:54.221787   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:16:54.221797   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:16:54.234910   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:16:54.234922   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:16:54.247561   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:16:54.247570   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:16:54.258849   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:16:54.258860   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:16:54.283195   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:16:54.283204   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:16:54.308082   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:16:54.308093   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:16:54.312877   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:16:54.312885   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:16:54.347914   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:16:54.347925   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:16:54.362051   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:16:54.362063   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:16:54.376138   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:16:54.376148   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:16:54.388400   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:16:54.388410   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:16:54.404207   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:16:54.404218   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:16:54.415951   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:16:54.415966   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:16:54.455024   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:16:54.455033   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:16:54.477058   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:16:54.477069   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:16:54.488352   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:16:54.488363   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
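After enumeration, the gathering pass above tails the last 400 lines of every discovered container and adds a fixed set of host-level sources: the kubelet and docker/cri-docker journald units, filtered dmesg, `kubectl describe nodes` via the cached v1.24.1 binary, and a container-status listing that falls back from crictl to docker. A compressed sketch of that fan-out, with the commands copied verbatim from the log; the `<id>` placeholder is illustrative and should be replaced by an ID from the enumeration step:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // logSources reproduces the commands behind the "Gathering logs for ..."
    // lines. Map iteration order is nondeterministic, which is harmless here;
    // the log itself gathers the sources in a varying order each pass.
    var logSources = map[string]string{
    	"kubelet":        "sudo journalctl -u kubelet -n 400",
    	"Docker":         "sudo journalctl -u docker -u cri-docker -n 400",
    	"dmesg":          "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    	"describe nodes": "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig",
    	// Fall back to docker when crictl is unavailable, exactly as logged.
    	"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	// One such entry per discovered container; <id> is a placeholder for
    	// e.g. b9df94347965 (etcd) and will simply error until substituted.
    	"etcd [<id>]": "docker logs --tail 400 <id>",
    }

    func main() {
    	for name, cmd := range logSources {
    		fmt.Println("Gathering logs for", name, "...")
    		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			fmt.Println("  error:", err)
    		}
    		fmt.Printf("%s", out)
    	}
    }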
	I0320 17:16:57.006001   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:02.008502   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:02.008857   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:02.048425   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:02.048553   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:02.066139   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:02.066233   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:02.078987   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:02.079060   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:02.091101   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:02.091187   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:02.101751   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:02.101821   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:02.112265   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:02.112327   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:02.122802   14525 logs.go:276] 0 containers: []
	W0320 17:17:02.122814   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:02.122875   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:02.133242   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:02.133259   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:02.133265   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:02.144801   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:02.144811   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:02.169818   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:02.169828   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:02.183758   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:02.183768   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:02.195240   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:02.195249   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:02.206776   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:02.206785   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:02.217994   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:02.218004   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:02.222233   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:02.222239   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:02.236222   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:02.236230   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:02.247285   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:02.247303   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:02.262319   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:02.262329   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:02.284304   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:02.284313   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:02.301565   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:02.301575   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:02.335612   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:02.335625   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:02.349221   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:02.349234   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:02.360945   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:02.360956   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:02.400511   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:02.400522   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:04.927331   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:09.928131   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:09.928336   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:09.946408   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:09.946512   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:09.960804   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:09.960885   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:09.972606   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:09.972689   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:09.982996   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:09.983070   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:09.992811   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:09.992872   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:10.007400   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:10.007469   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:10.018430   14525 logs.go:276] 0 containers: []
	W0320 17:17:10.018444   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:10.018503   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:10.029092   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:10.029110   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:10.029116   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:10.040899   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:10.040912   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:10.075680   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:10.075692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:10.088001   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:10.088014   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:10.106772   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:10.106782   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:10.118332   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:10.118342   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:10.132598   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:10.132609   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:10.156954   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:10.156962   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:10.182354   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:10.182365   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:10.196438   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:10.196450   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:10.211097   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:10.211109   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:10.222646   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:10.222657   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:10.234255   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:10.234268   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:10.245564   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:10.245577   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:10.284095   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:10.284103   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:10.288854   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:10.288863   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:10.306091   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:10.306101   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:12.822897   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:17.824663   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:17.824795   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:17.835807   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:17.835881   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:17.847096   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:17.847158   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:17.857637   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:17.857697   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:17.869115   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:17.869185   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:17.880018   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:17.880089   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:17.891044   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:17.891120   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:17.900933   14525 logs.go:276] 0 containers: []
	W0320 17:17:17.900943   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:17.901003   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:17.911503   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:17.911521   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:17.911526   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:17.915678   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:17.915687   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:17.940245   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:17.940255   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:17.953736   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:17.953746   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:17.968727   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:17.968740   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:17.980184   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:17.980194   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:17.994636   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:17.994645   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:18.011797   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:18.011808   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:18.022959   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:18.022971   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:18.046294   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:18.046302   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:18.057738   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:18.057750   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:18.094870   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:18.094878   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:18.130682   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:18.130692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:18.144604   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:18.144615   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:18.155698   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:18.155707   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:18.171946   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:18.171956   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:18.183132   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:18.183145   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:20.698253   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:25.698480   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:25.698660   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:25.713348   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:25.713430   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:25.724997   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:25.725071   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:25.735754   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:25.735819   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:25.747591   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:25.747656   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:25.759131   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:25.759203   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:25.769709   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:25.769776   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:25.779844   14525 logs.go:276] 0 containers: []
	W0320 17:17:25.779857   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:25.779909   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:25.790389   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:25.790407   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:25.790413   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:25.794727   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:25.794734   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:25.809395   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:25.809406   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:25.820741   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:25.820753   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:25.832344   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:25.832353   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:25.843678   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:25.843688   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:25.868452   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:25.868462   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:25.903516   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:25.903526   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:25.917844   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:25.917854   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:25.929815   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:25.929828   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:25.941206   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:25.941216   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:25.952355   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:25.952367   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:25.989097   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:25.989105   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:26.003637   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:26.003648   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:26.018818   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:26.018829   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:26.042833   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:26.042843   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:26.067960   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:26.067970   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:28.581621   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:33.583751   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:33.583885   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:33.602191   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:33.602271   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:33.615765   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:33.615839   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:33.626620   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:33.626692   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:33.636935   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:33.637004   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:33.653567   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:33.653639   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:33.664190   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:33.664248   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:33.674590   14525 logs.go:276] 0 containers: []
	W0320 17:17:33.674604   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:33.674658   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:33.684992   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:33.685009   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:33.685015   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:33.696547   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:33.696561   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:33.707608   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:33.707620   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:33.733420   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:33.733428   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:33.737442   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:33.737449   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:33.757108   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:33.757123   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:33.782748   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:33.782762   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:33.797630   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:33.797642   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:33.809270   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:33.809282   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:33.845181   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:33.845192   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:33.856470   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:33.856481   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:33.870194   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:33.870207   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:33.885969   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:33.885981   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:33.903451   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:33.903461   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:33.941217   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:33.941228   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:33.957447   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:33.957457   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:33.969324   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:33.969338   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:36.482964   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:41.485177   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:41.485364   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:41.497320   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:41.497399   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:41.507649   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:41.507715   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:41.518013   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:41.518083   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:41.528461   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:41.528525   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:41.538920   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:41.538982   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:41.549189   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:41.549255   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:41.559525   14525 logs.go:276] 0 containers: []
	W0320 17:17:41.559536   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:41.559590   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:41.574461   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:41.574478   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:41.574482   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:41.598999   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:41.599017   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:41.638532   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:41.638549   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:41.673650   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:41.673663   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:41.685844   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:41.685854   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:41.697493   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:41.697504   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:41.710983   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:41.710993   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:41.728521   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:41.728533   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:41.741602   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:41.743748   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:41.755272   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:41.755284   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:41.767321   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:41.767331   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:41.778890   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:41.778901   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:41.782903   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:41.782910   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:41.807552   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:41.807562   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:41.821677   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:41.821687   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:41.835685   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:41.835696   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:41.852269   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:41.852278   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:44.366300   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:49.367558   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:49.367920   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:49.392859   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:49.392979   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:49.409724   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:49.409806   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:49.428725   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:49.428799   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:49.439152   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:49.439220   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:49.449893   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:49.449961   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:49.460162   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:49.460223   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:49.470457   14525 logs.go:276] 0 containers: []
	W0320 17:17:49.470468   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:49.470526   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:49.481309   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:49.481326   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:49.481332   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:49.495316   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:49.495328   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:49.506829   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:49.506839   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:49.517765   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:49.517775   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:49.554643   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:49.554659   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:49.568366   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:49.568379   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:49.582010   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:49.582020   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:17:49.599122   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:49.599131   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:49.603466   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:49.603473   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:49.641074   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:49.641086   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:49.657026   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:49.657038   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:49.669364   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:49.669379   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:49.684756   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:49.684769   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:49.696339   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:49.696351   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:49.710116   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:49.710126   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:49.732765   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:49.732775   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:49.759908   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:49.759918   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:52.274547   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:17:57.276729   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:17:57.277105   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:17:57.320102   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:17:57.320229   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:17:57.344302   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:17:57.344388   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:17:57.358304   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:17:57.358370   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:17:57.369758   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:17:57.369831   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:17:57.381176   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:17:57.381245   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:17:57.393889   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:17:57.393954   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:17:57.404138   14525 logs.go:276] 0 containers: []
	W0320 17:17:57.404148   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:17:57.404203   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:17:57.414601   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:17:57.414618   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:17:57.414624   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:17:57.430802   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:17:57.430812   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:17:57.442619   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:17:57.442628   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:17:57.454266   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:17:57.454277   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:17:57.477103   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:17:57.477112   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:17:57.513537   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:17:57.513546   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:17:57.534910   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:17:57.534921   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:17:57.552056   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:17:57.552067   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:17:57.564269   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:17:57.564283   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:17:57.576493   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:17:57.576502   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:17:57.611689   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:17:57.611700   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:17:57.626103   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:17:57.626114   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:17:57.651798   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:17:57.651810   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:17:57.665921   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:17:57.665931   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:17:57.676976   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:17:57.676986   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:17:57.681316   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:17:57.681325   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:17:57.692964   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:17:57.692976   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:00.213904   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:05.215629   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:05.215959   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:05.244303   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:05.244429   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:05.261197   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:05.261279   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:05.274655   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:05.274723   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:05.286769   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:05.286844   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:05.297408   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:05.297469   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:05.307903   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:05.307967   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:05.317705   14525 logs.go:276] 0 containers: []
	W0320 17:18:05.317715   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:05.317766   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:05.332098   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:05.332114   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:05.332120   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:05.357471   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:05.357486   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:05.372214   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:05.372223   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:05.394966   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:05.394974   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:05.406511   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:05.406529   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:05.443964   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:05.443975   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:05.455806   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:05.455816   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:05.467281   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:05.467295   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:05.471476   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:05.471482   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:05.485088   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:05.485098   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:05.500177   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:05.500187   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:05.512711   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:05.512722   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:05.529309   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:05.529319   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:05.542148   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:05.542159   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:05.578261   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:05.578272   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:05.592138   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:05.592149   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:05.604049   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:05.604065   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:08.117892   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:13.119891   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:13.120084   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:13.140515   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:13.140611   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:13.154935   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:13.155002   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:13.167207   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:13.167281   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:13.178165   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:13.178226   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:13.188789   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:13.188858   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:13.199664   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:13.199738   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:13.209901   14525 logs.go:276] 0 containers: []
	W0320 17:18:13.209912   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:13.209968   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:13.220755   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:13.220772   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:13.220777   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:13.260143   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:13.260158   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:13.274200   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:13.274212   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:13.285859   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:13.285872   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:13.298026   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:13.298036   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:13.322755   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:13.322765   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:13.337754   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:13.337765   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:13.357050   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:13.357061   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:13.371102   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:13.371113   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:13.383057   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:13.383067   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:13.406336   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:13.406347   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:13.442940   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:13.442951   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:13.457840   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:13.457851   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:13.469688   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:13.469699   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:13.487766   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:13.487775   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:13.492100   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:13.492106   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:13.503819   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:13.503829   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:16.015604   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:21.017681   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:21.018101   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:21.050821   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:21.050941   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:21.069322   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:21.069422   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:21.084123   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:21.084201   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:21.095577   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:21.095641   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:21.105907   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:21.105976   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:21.116098   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:21.116162   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:21.125903   14525 logs.go:276] 0 containers: []
	W0320 17:18:21.125914   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:21.125970   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:21.136471   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:21.136489   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:21.136494   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:21.149815   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:21.149827   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:21.169747   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:21.169758   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:21.183798   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:21.183807   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:21.195236   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:21.195251   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:21.212765   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:21.212779   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:21.224606   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:21.224615   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:21.249063   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:21.249073   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:21.263320   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:21.263330   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:21.274653   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:21.274664   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:21.278725   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:21.278734   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:21.312521   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:21.312531   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:21.323857   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:21.323867   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:21.338499   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:21.338508   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:21.349964   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:21.349980   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:21.387021   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:21.387029   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:21.401892   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:21.401901   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:23.927532   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:28.929669   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:28.930061   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:28.967176   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:28.967309   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:28.990228   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:28.990332   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:29.005427   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:29.005519   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:29.017926   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:29.017996   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:29.029402   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:29.029474   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:29.041389   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:29.041462   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:29.053418   14525 logs.go:276] 0 containers: []
	W0320 17:18:29.053433   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:29.053514   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:29.069019   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:29.069039   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:29.069044   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:29.086852   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:29.086862   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:29.098588   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:29.098599   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:29.109768   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:29.109778   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:29.146791   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:29.146815   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:29.171116   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:29.171129   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:29.182759   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:29.182772   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:29.193930   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:29.193941   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:29.206260   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:29.206279   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:29.210504   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:29.210511   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:29.224794   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:29.224808   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:29.240737   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:29.240750   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:29.257188   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:29.257198   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:29.271091   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:29.271105   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:29.282606   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:29.282614   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:29.304955   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:29.304963   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:29.342024   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:29.342036   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:31.858491   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:36.860558   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:36.860953   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:36.893857   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:36.894003   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:36.913150   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:36.913243   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:36.928640   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:36.928722   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:36.940655   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:36.940724   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:36.951158   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:36.951223   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:36.967133   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:36.967203   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:36.977431   14525 logs.go:276] 0 containers: []
	W0320 17:18:36.977440   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:36.977495   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:36.988454   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:36.988471   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:36.988476   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:37.011167   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:37.011175   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:37.048504   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:37.048519   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:37.066711   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:37.066728   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:37.098256   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:37.098272   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:37.120610   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:37.120623   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:37.132759   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:37.132772   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:37.136782   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:37.136792   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:37.150475   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:37.150486   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:37.163066   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:37.163077   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:37.174614   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:37.174624   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:37.198786   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:37.198797   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:37.214824   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:37.214834   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:37.230744   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:37.230753   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:37.242986   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:37.242995   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:37.254763   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:37.254774   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:37.293542   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:37.293552   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:39.810096   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:44.811434   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:44.811911   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:44.847049   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:44.847175   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:44.864323   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:44.864415   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:44.878003   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:44.878076   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:44.889677   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:44.889751   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:44.900325   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:44.900394   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:44.910744   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:44.910804   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:44.921216   14525 logs.go:276] 0 containers: []
	W0320 17:18:44.921226   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:44.921289   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:44.931617   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:44.931634   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:44.931640   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:44.970147   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:44.970156   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:45.017655   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:45.017666   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:45.029104   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:45.029119   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:45.040530   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:45.040539   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:45.052076   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:45.052087   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:45.057007   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:45.057015   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:45.072510   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:45.072520   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:45.087694   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:45.087705   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:45.100228   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:45.100240   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:45.112695   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:45.112707   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:45.126920   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:45.126930   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:45.151553   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:45.151567   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:45.166180   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:45.166190   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:45.181133   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:45.181143   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:45.198535   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:45.198545   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:45.210016   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:45.210027   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:47.734000   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:18:52.736121   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:18:52.736435   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:18:52.772612   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:18:52.772744   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:18:52.794838   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:18:52.794930   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:18:52.808425   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:18:52.808499   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:18:52.820115   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:18:52.820190   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:18:52.830603   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:18:52.830671   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:18:52.840956   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:18:52.841021   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:18:52.856779   14525 logs.go:276] 0 containers: []
	W0320 17:18:52.856790   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:18:52.856846   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:18:52.867394   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:18:52.867411   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:18:52.867416   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:18:52.903390   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:18:52.903401   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:18:52.929245   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:18:52.929255   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:18:52.940400   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:18:52.940409   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:18:52.952198   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:18:52.952208   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:18:52.974228   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:18:52.974235   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:18:53.010920   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:18:53.010927   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:18:53.026683   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:18:53.026692   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:18:53.038147   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:18:53.038159   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:18:53.049740   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:18:53.049749   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:18:53.061163   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:18:53.061172   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:18:53.074246   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:18:53.074259   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:18:53.091989   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:18:53.092001   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:18:53.103544   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:18:53.103554   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:18:53.107763   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:18:53.107770   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:18:53.129620   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:18:53.129630   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:18:53.143937   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:18:53.143950   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:18:55.661115   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:00.663131   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:00.663377   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:00.687107   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:19:00.687220   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:00.703989   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:19:00.704054   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:00.716590   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:19:00.716658   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:00.727652   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:19:00.727722   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:00.740035   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:19:00.740099   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:00.754830   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:19:00.754904   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:00.769632   14525 logs.go:276] 0 containers: []
	W0320 17:19:00.769645   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:00.769700   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:00.780194   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:19:00.780212   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:19:00.780217   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:19:00.792129   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:19:00.792139   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:19:00.809567   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:00.809578   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:00.832371   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:19:00.832378   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:19:00.846926   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:00.846936   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:00.885595   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:19:00.885607   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:19:00.910653   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:00.910665   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:00.949029   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:19:00.949041   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:19:00.960772   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:19:00.960786   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:19:00.972583   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:19:00.972594   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:19:00.988198   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:19:00.988210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:19:00.999814   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:19:00.999824   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:19:01.015020   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:19:01.015032   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:19:01.025834   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:19:01.025845   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:01.037508   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:19:01.037518   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:19:01.058234   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:19:01.058244   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:19:01.072378   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:01.072388   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:03.577489   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:08.579483   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:08.579621   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:19:08.593782   14525 logs.go:276] 2 containers: [f26b26c5947f cd64e17cdade]
	I0320 17:19:08.593867   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:19:08.605266   14525 logs.go:276] 2 containers: [b9df94347965 00d63bb5d94b]
	I0320 17:19:08.605335   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:19:08.616112   14525 logs.go:276] 1 containers: [1337447e24d0]
	I0320 17:19:08.616185   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:19:08.626555   14525 logs.go:276] 2 containers: [e40a5d4a1750 fd0f420578d3]
	I0320 17:19:08.626626   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:19:08.636968   14525 logs.go:276] 1 containers: [c849c0fc8399]
	I0320 17:19:08.637034   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:19:08.647602   14525 logs.go:276] 2 containers: [3c32e0a0f45c cd1b8729cc02]
	I0320 17:19:08.647665   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:19:08.658079   14525 logs.go:276] 0 containers: []
	W0320 17:19:08.658088   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:19:08.658141   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:19:08.669278   14525 logs.go:276] 2 containers: [244615c35a1b cf71bc29ca91]
	I0320 17:19:08.669294   14525 logs.go:123] Gathering logs for kube-controller-manager [3c32e0a0f45c] ...
	I0320 17:19:08.669299   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3c32e0a0f45c"
	I0320 17:19:08.686604   14525 logs.go:123] Gathering logs for storage-provisioner [244615c35a1b] ...
	I0320 17:19:08.686614   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 244615c35a1b"
	I0320 17:19:08.699827   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:19:08.699839   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:19:08.711928   14525 logs.go:123] Gathering logs for etcd [00d63bb5d94b] ...
	I0320 17:19:08.711938   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 00d63bb5d94b"
	I0320 17:19:08.726592   14525 logs.go:123] Gathering logs for coredns [1337447e24d0] ...
	I0320 17:19:08.726605   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1337447e24d0"
	I0320 17:19:08.738883   14525 logs.go:123] Gathering logs for kube-scheduler [fd0f420578d3] ...
	I0320 17:19:08.738897   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 fd0f420578d3"
	I0320 17:19:08.754280   14525 logs.go:123] Gathering logs for kube-proxy [c849c0fc8399] ...
	I0320 17:19:08.754291   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 c849c0fc8399"
	I0320 17:19:08.765953   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:19:08.765965   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:19:08.806217   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:19:08.806227   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:19:08.810465   14525 logs.go:123] Gathering logs for kube-apiserver [cd64e17cdade] ...
	I0320 17:19:08.810474   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd64e17cdade"
	I0320 17:19:08.834583   14525 logs.go:123] Gathering logs for storage-provisioner [cf71bc29ca91] ...
	I0320 17:19:08.834592   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cf71bc29ca91"
	I0320 17:19:08.845848   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:19:08.845858   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:19:08.867518   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:19:08.867525   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:19:08.902436   14525 logs.go:123] Gathering logs for kube-apiserver [f26b26c5947f] ...
	I0320 17:19:08.902447   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 f26b26c5947f"
	I0320 17:19:08.916884   14525 logs.go:123] Gathering logs for etcd [b9df94347965] ...
	I0320 17:19:08.916894   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 b9df94347965"
	I0320 17:19:08.930864   14525 logs.go:123] Gathering logs for kube-scheduler [e40a5d4a1750] ...
	I0320 17:19:08.930875   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 e40a5d4a1750"
	I0320 17:19:08.943619   14525 logs.go:123] Gathering logs for kube-controller-manager [cd1b8729cc02] ...
	I0320 17:19:08.943631   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 cd1b8729cc02"
	I0320 17:19:11.457570   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:16.459590   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:16.459672   14525 kubeadm.go:591] duration metric: took 4m3.751348083s to restartPrimaryControlPlane
	W0320 17:19:16.459744   14525 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0320 17:19:16.459775   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I0320 17:19:17.498914   14525 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": (1.03916725s)
	I0320 17:19:17.498990   14525 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0320 17:19:17.504302   14525 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0320 17:19:17.507321   14525 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0320 17:19:17.510128   14525 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0320 17:19:17.510135   14525 kubeadm.go:156] found existing configuration files:
	
	I0320 17:19:17.510161   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf
	I0320 17:19:17.512772   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0320 17:19:17.512796   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0320 17:19:17.515627   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf
	I0320 17:19:17.518841   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0320 17:19:17.518862   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0320 17:19:17.521977   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf
	I0320 17:19:17.524485   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0320 17:19:17.524510   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0320 17:19:17.527516   14525 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf
	I0320 17:19:17.530542   14525 kubeadm.go:162] "https://control-plane.minikube.internal:52304" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:52304 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0320 17:19:17.530567   14525 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0320 17:19:17.533079   14525 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0320 17:19:17.549612   14525 kubeadm.go:309] [init] Using Kubernetes version: v1.24.1
	I0320 17:19:17.549691   14525 kubeadm.go:309] [preflight] Running pre-flight checks
	I0320 17:19:17.598444   14525 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0320 17:19:17.598507   14525 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0320 17:19:17.598569   14525 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0320 17:19:17.650951   14525 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0320 17:19:17.655162   14525 out.go:204]   - Generating certificates and keys ...
	I0320 17:19:17.655195   14525 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0320 17:19:17.655226   14525 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0320 17:19:17.655263   14525 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0320 17:19:17.655297   14525 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0320 17:19:17.655353   14525 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0320 17:19:17.655382   14525 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0320 17:19:17.655418   14525 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0320 17:19:17.655456   14525 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0320 17:19:17.655592   14525 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0320 17:19:17.655649   14525 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0320 17:19:17.655682   14525 kubeadm.go:309] [certs] Using the existing "sa" key
	I0320 17:19:17.655714   14525 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0320 17:19:17.746178   14525 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0320 17:19:17.828140   14525 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0320 17:19:17.917561   14525 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0320 17:19:18.109431   14525 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0320 17:19:18.139948   14525 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0320 17:19:18.140289   14525 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0320 17:19:18.140310   14525 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0320 17:19:18.214313   14525 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0320 17:19:18.222462   14525 out.go:204]   - Booting up control plane ...
	I0320 17:19:18.222517   14525 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0320 17:19:18.222553   14525 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0320 17:19:18.222585   14525 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0320 17:19:18.222623   14525 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0320 17:19:18.222717   14525 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0320 17:19:22.720869   14525 kubeadm.go:309] [apiclient] All control plane components are healthy after 4.501795 seconds
	I0320 17:19:22.720983   14525 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0320 17:19:22.727398   14525 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0320 17:19:23.236734   14525 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0320 17:19:23.236837   14525 kubeadm.go:309] [mark-control-plane] Marking the node stopped-upgrade-184000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0320 17:19:23.743003   14525 kubeadm.go:309] [bootstrap-token] Using token: 5mp7r1.8ewgf33bb9i5an87
	I0320 17:19:23.749864   14525 out.go:204]   - Configuring RBAC rules ...
	I0320 17:19:23.749931   14525 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0320 17:19:23.750003   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0320 17:19:23.756367   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0320 17:19:23.757336   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0320 17:19:23.758425   14525 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0320 17:19:23.759162   14525 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0320 17:19:23.762575   14525 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0320 17:19:23.929849   14525 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0320 17:19:24.147474   14525 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0320 17:19:24.147823   14525 kubeadm.go:309] 
	I0320 17:19:24.147856   14525 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0320 17:19:24.147861   14525 kubeadm.go:309] 
	I0320 17:19:24.147896   14525 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0320 17:19:24.147899   14525 kubeadm.go:309] 
	I0320 17:19:24.147909   14525 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0320 17:19:24.147937   14525 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0320 17:19:24.147965   14525 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0320 17:19:24.147968   14525 kubeadm.go:309] 
	I0320 17:19:24.147994   14525 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0320 17:19:24.147998   14525 kubeadm.go:309] 
	I0320 17:19:24.148021   14525 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0320 17:19:24.148024   14525 kubeadm.go:309] 
	I0320 17:19:24.148050   14525 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0320 17:19:24.148176   14525 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0320 17:19:24.148215   14525 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0320 17:19:24.148219   14525 kubeadm.go:309] 
	I0320 17:19:24.148288   14525 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0320 17:19:24.148377   14525 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0320 17:19:24.148394   14525 kubeadm.go:309] 
	I0320 17:19:24.148457   14525 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5mp7r1.8ewgf33bb9i5an87 \
	I0320 17:19:24.148524   14525 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 \
	I0320 17:19:24.148537   14525 kubeadm.go:309] 	--control-plane 
	I0320 17:19:24.148540   14525 kubeadm.go:309] 
	I0320 17:19:24.148584   14525 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0320 17:19:24.148587   14525 kubeadm.go:309] 
	I0320 17:19:24.148632   14525 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5mp7r1.8ewgf33bb9i5an87 \
	I0320 17:19:24.148704   14525 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:185f26a629853f25fceceb0266b7da38a73499cb5a874e544b63fe7b511dff93 
	I0320 17:19:24.148823   14525 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0320 17:19:24.148837   14525 cni.go:84] Creating CNI manager for ""
	I0320 17:19:24.148845   14525 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:19:24.153772   14525 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0320 17:19:24.163795   14525 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0320 17:19:24.167355   14525 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0320 17:19:24.172805   14525 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0320 17:19:24.172868   14525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0320 17:19:24.173119   14525 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.24.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes stopped-upgrade-184000 minikube.k8s.io/updated_at=2024_03_20T17_19_24_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=9b0fc36eb9ca8470d9076ea1f55dc7d08f560c77 minikube.k8s.io/name=stopped-upgrade-184000 minikube.k8s.io/primary=true
	I0320 17:19:24.176217   14525 ops.go:34] apiserver oom_adj: -16
	I0320 17:19:24.217650   14525 kubeadm.go:1107] duration metric: took 44.831666ms to wait for elevateKubeSystemPrivileges
	W0320 17:19:24.217867   14525 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0320 17:19:24.217872   14525 kubeadm.go:393] duration metric: took 4m11.52542775s to StartCluster
	I0320 17:19:24.217882   14525 settings.go:142] acquiring lock: {Name:mkcbd6581de6bc632d1ec3032ba4410c2461597a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:24.218014   14525 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:19:24.218650   14525 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/kubeconfig: {Name:mk87d9a95abf455216479b90714c3c5b079a6861 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:19:24.218987   14525 start.go:234] Will wait 6m0s for node &{Name: IP:10.0.2.15 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:19:24.223803   14525 out.go:177] * Verifying Kubernetes components...
	I0320 17:19:24.219016   14525 config.go:182] Loaded profile config "stopped-upgrade-184000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0320 17:19:24.219166   14525 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0320 17:19:24.229741   14525 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-184000"
	I0320 17:19:24.229785   14525 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-184000"
	I0320 17:19:24.229800   14525 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0320 17:19:24.229808   14525 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-184000"
	I0320 17:19:24.229898   14525 addons.go:234] Setting addon storage-provisioner=true in "stopped-upgrade-184000"
	W0320 17:19:24.229905   14525 addons.go:243] addon storage-provisioner should already be in state true
	I0320 17:19:24.229922   14525 host.go:66] Checking if "stopped-upgrade-184000" exists ...
	I0320 17:19:24.231045   14525 kapi.go:59] client config for stopped-upgrade-184000: &rest.Config{Host:"https://10.0.2.15:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key", CAFile:"/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x105922a80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
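
The client config dumped above boils down to four non-zero fields: the host and the three TLS file paths. A sketch of constructing the equivalent config with client-go (assuming k8s.io/client-go and k8s.io/apimachinery are on the module path; this is an illustration, not the kapi.go code):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // The non-zero fields from the rest.Config dump above.
        cfg := &rest.Config{
            Host: "https://10.0.2.15:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.crt",
                KeyFile:  "/Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/stopped-upgrade-184000/client.key",
                CAFile:   "/Users/jenkins/minikube-integration/17713-12248/.minikube/ca.crt",
            },
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            fmt.Println("building clientset:", err)
            return
        }
        // Against this cluster any call would time out, since the
        // apiserver never becomes reachable below.
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        fmt.Println(nodes, err)
    }
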
	I0320 17:19:24.231410   14525 addons.go:234] Setting addon default-storageclass=true in "stopped-upgrade-184000"
	W0320 17:19:24.231416   14525 addons.go:243] addon default-storageclass should already be in state true
	I0320 17:19:24.231423   14525 host.go:66] Checking if "stopped-upgrade-184000" exists ...
	I0320 17:19:24.235734   14525 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0320 17:19:24.239822   14525 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:24.239828   14525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0320 17:19:24.239834   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
	I0320 17:19:24.240559   14525 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:24.240565   14525 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0320 17:19:24.240569   14525 sshutil.go:53] new ssh client: &{IP:localhost Port:52272 SSHKeyPath:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa Username:docker}
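
Both addon manifests travel over the same forwarded SSH endpoint (localhost:52272, user docker, the profile's id_rsa). A sketch of opening that connection with golang.org/x/crypto/ssh (the library choice is an assumption; minikube's sshutil wraps an equivalent client):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/stopped-upgrade-184000/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User: "docker",
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Acceptable for a throwaway test VM only; never skip host
            // key checks against real hosts.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        }
        client, err := ssh.Dial("tcp", "localhost:52272", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
        fmt.Println(string(out), err)
    }
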
	I0320 17:19:24.305328   14525 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0320 17:19:24.310186   14525 api_server.go:52] waiting for apiserver process to appear ...
	I0320 17:19:24.310225   14525 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0320 17:19:24.313936   14525 api_server.go:72] duration metric: took 94.939875ms to wait for apiserver process to appear ...
	I0320 17:19:24.313944   14525 api_server.go:88] waiting for apiserver healthz status ...
	I0320 17:19:24.313951   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:24.334160   14525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0320 17:19:24.335906   14525 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0320 17:19:29.314252   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:29.314326   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:34.315677   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:34.315718   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:39.315765   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:39.315837   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:44.315940   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:44.315971   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:49.316232   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:49.316268   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:19:54.316637   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:54.316676   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	W0320 17:19:54.704166   14525 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error listing StorageClasses: Get "https://10.0.2.15:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 10.0.2.15:8443: i/o timeout]
	I0320 17:19:54.709363   14525 out.go:177] * Enabled addons: storage-provisioner
	I0320 17:19:54.717332   14525 addons.go:505] duration metric: took 30.499510542s for enable addons: enabled=[storage-provisioner]
	I0320 17:19:59.317206   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:19:59.317244   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:04.317988   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:04.318017   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:09.318953   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:09.318982   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:14.320207   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:14.320251   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:19.321846   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:19.321877   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:24.323867   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
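
From here on the pattern is fixed: each healthz probe starts, then fails exactly five seconds later with "context deadline exceeded", i.e. the guest at 10.0.2.15 never answers. That cadence matches a plain HTTP client with a 5-second timeout; the sketch below reproduces it (the timeout value and TLS handling are inferred from the log's timing, not taken from minikube's source):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // A 5s client timeout reproduces the log's rhythm: "Checking
        // apiserver healthz" followed by "stopped: ... context deadline
        // exceeded" five seconds later.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver serves a self-signed cert; skip
                // verification for this illustrative probe only.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for {
            resp, err := client.Get("https://10.0.2.15:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
            } else {
                fmt.Println("stopped:", err)
            }
            time.Sleep(time.Second)
        }
    }
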
	I0320 17:20:24.323983   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:24.343216   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:24.343309   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:24.355618   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:24.355694   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:24.365598   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:24.365667   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:24.376212   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:24.376280   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:24.386636   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:24.386707   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:24.397093   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:24.397156   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:24.407443   14525 logs.go:276] 0 containers: []
	W0320 17:20:24.407454   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:24.407511   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:24.419063   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:24.419079   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:24.419084   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:24.430486   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:24.430498   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:24.453847   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:24.453856   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:24.488378   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:24.488386   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:24.492298   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:24.492306   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:24.506920   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:24.506928   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:24.518039   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:24.518047   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:24.529447   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:24.529457   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:24.551026   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:24.551041   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:24.562253   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:24.562262   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:24.601234   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:24.601245   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:24.619630   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:24.619645   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:24.631393   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:24.631406   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
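
The diagnostics pass above (and every repetition of it below) follows a fixed recipe: list container IDs per control-plane component with a docker name filter, then tail each container's last 400 log lines. A self-contained sketch of that loop with os/exec (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors `docker ps -a --filter=name=<f> --format={{.ID}}`.
    func containerIDs(nameFilter string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name="+nameFilter, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, component := range []string{
            "k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
            "k8s_kube-scheduler", "k8s_kube-proxy",
            "k8s_kube-controller-manager", "k8s_storage-provisioner",
        } {
            ids, err := containerIDs(component)
            if err != nil {
                fmt.Println(component, "error:", err)
                continue
            }
            for _, id := range ids {
                // Same tail depth the log uses above.
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
            }
        }
    }
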
	I0320 17:20:27.148990   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:32.151118   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:32.151299   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:32.164799   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:32.164883   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:32.176290   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:32.176368   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:32.186682   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:32.186751   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:32.197087   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:32.197153   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:32.207742   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:32.207810   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:32.218693   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:32.218754   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:32.228560   14525 logs.go:276] 0 containers: []
	W0320 17:20:32.228572   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:32.228630   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:32.239246   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:32.239261   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:32.239266   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:32.250910   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:32.250920   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:32.261702   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:32.261712   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:32.275963   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:32.275973   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:32.295718   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:32.295729   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:32.307132   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:32.307143   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:32.341533   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:32.341549   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:32.346436   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:32.346443   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:32.359982   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:32.359992   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:32.371416   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:32.371427   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:32.394532   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:32.394539   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:32.429198   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:32.429210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:32.443812   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:32.443832   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:34.957242   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:39.959438   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:39.959636   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:39.982320   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:39.982394   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:39.995000   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:39.995064   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:40.007881   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:40.007946   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:40.026672   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:40.026737   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:40.037142   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:40.037206   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:40.048122   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:40.048188   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:40.058602   14525 logs.go:276] 0 containers: []
	W0320 17:20:40.058612   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:40.058668   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:40.069143   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:40.069159   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:40.069165   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:40.105040   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:40.105049   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:40.141918   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:40.141933   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:40.156325   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:40.156335   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:40.167999   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:40.168009   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:40.185597   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:40.185608   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:40.206652   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:40.206660   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:40.218215   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:40.218225   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:40.242268   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:40.242275   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:40.246311   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:40.246317   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:40.263078   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:40.263091   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:40.274900   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:40.274915   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:40.286797   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:40.286807   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:42.799913   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:47.801989   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:47.802139   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:47.817360   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:47.817443   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:47.829960   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:47.830028   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:47.840853   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:47.840916   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:47.851196   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:47.851254   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:47.862085   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:47.862144   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:47.872813   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:47.872886   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:47.883089   14525 logs.go:276] 0 containers: []
	W0320 17:20:47.883101   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:47.883163   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:47.894740   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:47.894756   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:47.894761   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:47.899424   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:47.899450   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:47.914132   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:47.914142   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:47.927418   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:47.927428   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:47.945487   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:47.945517   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:47.960864   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:47.960875   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:47.985171   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:47.985182   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:47.996926   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:47.996936   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:48.032394   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:48.032404   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:48.077594   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:48.077604   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:48.091781   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:48.091795   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:48.105218   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:48.105231   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:48.117345   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:48.117356   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:50.642863   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:20:55.644083   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:20:55.644231   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:20:55.655176   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:20:55.655258   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:20:55.665792   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:20:55.665865   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:20:55.680930   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:20:55.680995   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:20:55.691315   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:20:55.691379   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:20:55.701675   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:20:55.701745   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:20:55.712237   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:20:55.712302   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:20:55.723276   14525 logs.go:276] 0 containers: []
	W0320 17:20:55.723286   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:20:55.723340   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:20:55.733318   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:20:55.733332   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:20:55.733337   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:20:55.745620   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:20:55.745644   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:20:55.768630   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:20:55.768638   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:20:55.780322   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:20:55.780331   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:20:55.794682   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:20:55.794693   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:20:55.808343   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:20:55.808352   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:20:55.823194   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:20:55.823203   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:20:55.834711   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:20:55.834722   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:20:55.852080   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:20:55.852092   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:20:55.886335   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:20:55.886343   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:20:55.890379   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:20:55.890386   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:20:55.933052   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:20:55.933063   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:20:55.950630   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:20:55.950640   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:20:58.469240   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:03.471487   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:03.471812   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:03.506821   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:03.506945   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:03.527473   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:03.527569   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:03.543253   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:21:03.543321   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:03.555099   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:03.555164   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:03.565519   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:03.565586   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:03.576209   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:03.576275   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:03.587730   14525 logs.go:276] 0 containers: []
	W0320 17:21:03.587741   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:03.587801   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:03.603426   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:03.603443   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:03.603447   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:03.615337   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:03.615351   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:03.627070   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:03.627080   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:03.650653   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:03.650662   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:03.654844   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:03.654851   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:03.669209   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:03.669220   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:03.683359   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:03.683369   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:03.695152   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:03.695163   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:03.710248   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:03.710258   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:03.722023   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:03.722039   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:03.743511   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:03.743522   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:03.754989   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:03.754999   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:03.791426   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:03.791479   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:06.328697   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:11.330780   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:11.330958   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:11.346563   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:11.346636   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:11.359044   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:11.359114   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:11.369868   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:21:11.369932   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:11.380534   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:11.380601   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:11.391040   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:11.391103   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:11.401847   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:11.401905   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:11.412537   14525 logs.go:276] 0 containers: []
	W0320 17:21:11.412547   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:11.412595   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:11.430049   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:11.430063   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:11.430068   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:11.441322   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:11.441332   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:11.453015   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:11.453026   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:11.469624   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:11.469635   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:11.481182   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:11.481193   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:11.498735   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:11.498745   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:11.510360   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:11.510370   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:11.545704   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:11.545715   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:11.549763   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:11.549771   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:11.585326   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:11.585335   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:11.619185   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:11.619196   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:11.633685   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:11.633694   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:11.658697   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:11.658707   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:14.173070   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:19.174043   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:19.174156   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:19.189359   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:19.189438   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:19.201403   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:19.201464   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:19.212292   14525 logs.go:276] 2 containers: [2dfb2c043f8e 2177fc854340]
	I0320 17:21:19.212358   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:19.222711   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:19.222775   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:19.233169   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:19.233230   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:19.243358   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:19.243419   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:19.253407   14525 logs.go:276] 0 containers: []
	W0320 17:21:19.253417   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:19.253471   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:19.263769   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:19.263784   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:19.263790   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:19.285510   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:19.285519   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:19.300419   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:19.300429   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:19.312441   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:19.312452   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:19.324015   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:19.324025   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:19.348183   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:19.348194   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:19.359439   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:19.359450   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:19.373789   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:19.373809   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:19.387497   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:19.387508   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:19.422130   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:19.422143   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:19.433502   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:19.433511   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:19.451176   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:19.451186   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:19.487073   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:19.487081   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:21.993278   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:26.993476   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:26.993601   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:27.008444   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:27.008518   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:27.020760   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:27.020832   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:27.031567   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:27.031646   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:27.042097   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:27.042159   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:27.052198   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:27.052272   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:27.062807   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:27.062879   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:27.073101   14525 logs.go:276] 0 containers: []
	W0320 17:21:27.073112   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:27.073170   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:27.083517   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:27.083533   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:27.083539   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:27.088098   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:27.088107   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:27.099433   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:27.099448   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:27.111434   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:27.111445   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:27.125416   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:27.125429   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:27.147376   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:27.147385   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:27.182279   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:27.182288   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:27.217507   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:27.217526   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:27.231736   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:27.231746   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:27.243428   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:27.243438   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:27.259716   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:27.259725   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:27.272069   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:27.272079   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:27.287090   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:27.287101   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:27.299462   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:27.299473   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:27.324465   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:27.324474   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:29.837848   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:34.839808   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:34.839963   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:34.860973   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:34.861050   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:34.871722   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:34.871792   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:34.882295   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:34.882365   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:34.892824   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:34.892894   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:34.902859   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:34.902927   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:34.913670   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:34.913735   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:34.923673   14525 logs.go:276] 0 containers: []
	W0320 17:21:34.923683   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:34.923739   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:34.934091   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:34.934111   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:34.934116   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:34.953479   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:34.953492   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:34.965085   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:34.965098   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:34.990456   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:34.990467   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:35.004966   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:35.004978   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:35.016031   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:35.016043   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:35.027789   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:35.027801   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:35.039366   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:35.039377   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:35.043809   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:35.043818   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:35.077937   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:35.077948   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:35.089306   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:35.089316   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:35.103198   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:35.103210   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:35.121031   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:35.121043   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:35.132092   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:35.132102   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:35.167052   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:35.167062   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:37.683778   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:42.685981   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:42.686134   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:42.704777   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:42.704872   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:42.724416   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:42.724497   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:42.736093   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:42.736163   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:42.746935   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:42.746999   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:42.763864   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:42.763928   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:42.774776   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:42.774836   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:42.785031   14525 logs.go:276] 0 containers: []
	W0320 17:21:42.785043   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:42.785103   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:42.796232   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:42.796252   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:42.796258   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:42.807644   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:42.807656   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:42.833788   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:42.833797   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:42.847930   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:42.847945   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:42.860407   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:42.860415   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:42.873356   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:42.873366   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:42.885048   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:42.885058   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:42.889719   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:42.889725   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:42.901159   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:42.901169   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:42.912816   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:42.912827   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:42.931886   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:42.931895   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:42.966583   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:42.966599   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:43.006377   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:43.006392   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:43.024298   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:43.024307   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:43.038893   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:43.038906   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:45.552800   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:50.555036   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:50.555239   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:50.568429   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:50.568503   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:50.579047   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:50.579120   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:50.589802   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:50.589867   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:50.603899   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:50.603958   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:50.614475   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:50.614529   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:50.625269   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:50.625326   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:50.635176   14525 logs.go:276] 0 containers: []
	W0320 17:21:50.635189   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:50.635252   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:50.646106   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:50.646129   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:50.646134   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:50.664312   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:50.664328   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:50.678762   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:50.678772   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:50.694985   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:50.694994   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:50.730849   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:50.730859   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:50.765700   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:50.765711   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:50.778254   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:50.778264   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:50.790505   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:50.790516   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:50.806604   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:50.806614   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:50.832829   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:50.832838   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:50.856946   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:50.856955   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:21:50.861233   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:50.861244   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:50.875175   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:50.875183   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:50.889524   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:50.889536   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:50.900965   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:50.900981   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:53.415340   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:21:58.417410   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:21:58.417554   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:21:58.433222   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:21:58.433313   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:21:58.451986   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:21:58.452057   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:21:58.462970   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:21:58.463040   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:21:58.473921   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:21:58.473988   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:21:58.484201   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:21:58.484265   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:21:58.494995   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:21:58.495059   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:21:58.505072   14525 logs.go:276] 0 containers: []
	W0320 17:21:58.505082   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:21:58.505135   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:21:58.515283   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:21:58.515298   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:21:58.515302   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:21:58.527518   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:21:58.527531   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:21:58.550966   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:21:58.550977   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:21:58.562757   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:21:58.562769   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:21:58.574720   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:21:58.574730   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:21:58.586941   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:21:58.586954   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:21:58.598823   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:21:58.598836   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:21:58.612974   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:21:58.612986   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:21:58.628479   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:21:58.628489   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:21:58.640256   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:21:58.640267   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:21:58.652800   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:21:58.652811   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:21:58.688156   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:21:58.688170   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:21:58.706659   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:21:58.706669   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:21:58.723915   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:21:58.723929   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:21:58.757855   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:21:58.757863   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:01.263816   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:06.265957   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:06.266228   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:06.293443   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:06.293557   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:06.319449   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:06.319524   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:06.333726   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:06.333798   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:06.346513   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:06.346582   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:06.357168   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:06.357234   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:06.371734   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:06.371798   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:06.381707   14525 logs.go:276] 0 containers: []
	W0320 17:22:06.381721   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:06.381779   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:06.392407   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:06.392425   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:06.392430   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:06.403984   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:06.403997   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:06.415178   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:06.415187   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:06.432602   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:06.432611   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:06.444210   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:06.444219   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:06.468824   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:06.468832   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:06.504404   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:06.504413   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:06.508729   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:06.508738   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:06.522681   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:06.522693   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:06.534242   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:06.534256   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:06.546020   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:06.546031   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:06.557775   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:06.557785   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:06.593334   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:06.593344   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:06.607441   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:06.607451   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:06.619119   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:06.619130   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:09.135969   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:14.137123   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:14.137344   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:14.158348   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:14.158441   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:14.173191   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:14.173259   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:14.187565   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:14.187638   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:14.197981   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:14.198047   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:14.208571   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:14.208636   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:14.223893   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:14.223962   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:14.234638   14525 logs.go:276] 0 containers: []
	W0320 17:22:14.234649   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:14.234704   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:14.248327   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:14.248344   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:14.248350   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:14.260056   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:14.260067   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:14.285305   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:14.285315   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:14.299587   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:14.299597   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:14.310818   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:14.310828   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:14.321923   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:14.321933   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:14.333846   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:14.333855   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:14.352785   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:14.352795   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:14.367751   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:14.367763   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:14.386655   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:14.386668   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:14.398151   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:14.398161   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:14.409445   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:14.409455   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:14.443547   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:14.443555   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:14.447816   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:14.447825   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:14.482637   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:14.482650   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:16.996099   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:21.998284   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:21.998530   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:22.025654   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:22.025736   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:22.041116   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:22.041193   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:22.053911   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:22.053981   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:22.064302   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:22.064378   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:22.077370   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:22.077443   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:22.087971   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:22.088040   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:22.102892   14525 logs.go:276] 0 containers: []
	W0320 17:22:22.102904   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:22.102963   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:22.113677   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:22.113694   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:22.113699   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:22.125501   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:22.125512   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:22.140253   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:22.140264   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:22.145072   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:22.145081   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:22.159118   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:22.159129   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:22.170669   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:22.170679   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:22.188300   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:22.188310   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:22.200579   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:22.200588   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:22.215935   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:22.215944   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:22.240769   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:22.240777   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:22.276431   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:22.276438   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:22.312063   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:22.312074   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:22.326421   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:22.326432   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:22.341393   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:22.341405   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:22.354003   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:22.354015   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:24.867945   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:29.869985   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:29.870208   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:29.898453   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:29.898585   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:29.917265   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:29.917359   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:29.936630   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:29.936707   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:29.948974   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:29.949041   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:29.960315   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:29.960384   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:29.972822   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:29.972886   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:29.984059   14525 logs.go:276] 0 containers: []
	W0320 17:22:29.984074   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:29.984131   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:29.995250   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:29.995268   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:29.995275   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:30.030504   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:30.030515   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:30.046054   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:30.046067   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:30.050241   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:30.050250   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:30.063932   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:30.063941   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:30.075723   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:30.075735   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:30.087315   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:30.087326   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:30.105131   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:30.105143   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:30.119981   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:30.119993   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:30.130948   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:30.130958   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:30.143329   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:30.143342   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:30.155116   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:30.155124   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:30.166345   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:30.166357   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:30.199844   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:30.199858   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:30.224520   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:30.224530   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:32.738219   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:37.740215   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:37.740387   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:37.757419   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:37.757505   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:37.778702   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:37.778777   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:37.790350   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:37.790428   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:37.800881   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:37.800945   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:37.811853   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:37.811920   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:37.822446   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:37.822512   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:37.832405   14525 logs.go:276] 0 containers: []
	W0320 17:22:37.832417   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:37.832473   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:37.842618   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:37.842639   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:37.842646   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:37.854446   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:37.854457   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:37.871968   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:37.871978   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:37.884221   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:37.884231   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:37.920351   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:37.920361   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:37.931972   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:37.931982   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:37.945390   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:37.945403   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:37.957857   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:37.957866   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:37.992471   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:37.992484   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:38.007225   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:38.007235   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:38.021262   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:38.021276   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:38.038578   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:38.038588   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:38.062972   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:38.062982   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:38.067760   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:38.067767   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:38.079127   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:38.079141   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:40.604586   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:45.606725   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:45.607066   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:45.646469   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:45.646578   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:45.665588   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:45.665664   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:45.679946   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:45.680017   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:45.691860   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:45.691922   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:45.702329   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:45.702402   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:45.713096   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:45.713161   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:45.730106   14525 logs.go:276] 0 containers: []
	W0320 17:22:45.730120   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:45.730179   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:45.740747   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:45.740773   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:45.740777   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:45.745351   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:45.745358   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:45.761853   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:45.761862   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:45.776816   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:45.776826   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:45.788944   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:45.788956   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:45.809311   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:45.809320   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:45.820885   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:45.820906   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:45.832729   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:45.832738   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:45.857523   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:45.857532   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:45.893081   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:45.893091   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:45.907387   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:45.907398   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:45.920084   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:45.920097   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:45.932706   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:45.932718   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:45.945895   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:45.945905   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:45.981687   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:45.981698   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:48.498735   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:22:53.500721   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:22:53.500948   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:22:53.521555   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:22:53.521645   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:22:53.536569   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:22:53.536641   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:22:53.548578   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:22:53.548649   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:22:53.559559   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:22:53.559631   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:22:53.570077   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:22:53.570144   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:22:53.580375   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:22:53.580438   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:22:53.594933   14525 logs.go:276] 0 containers: []
	W0320 17:22:53.594945   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:22:53.595002   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:22:53.605576   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:22:53.605591   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:22:53.605596   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:22:53.616853   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:22:53.616862   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:22:53.652577   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:22:53.652588   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:22:53.657019   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:22:53.657028   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:22:53.668052   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:22:53.668066   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:22:53.682097   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:22:53.682106   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:22:53.694014   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:22:53.694027   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:22:53.710100   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:22:53.710109   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:22:53.721760   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:22:53.721773   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:22:53.733663   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:22:53.733675   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:22:53.768122   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:22:53.768131   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:22:53.782671   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:22:53.782682   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:22:53.798362   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:22:53.798374   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:22:53.812888   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:22:53.812900   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:22:53.833812   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:22:53.833822   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:22:56.360135   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:01.362103   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:01.362263   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:01.378594   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:23:01.378672   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:01.389389   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:23:01.389451   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:01.400232   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:23:01.400305   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:01.414012   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:23:01.414077   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:01.424506   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:23:01.424578   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:01.435091   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:23:01.435176   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:01.445448   14525 logs.go:276] 0 containers: []
	W0320 17:23:01.445459   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:01.445516   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:01.455734   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:23:01.455750   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:23:01.455754   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:23:01.467174   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:23:01.467187   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:23:01.478613   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:23:01.478622   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:23:01.490151   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:23:01.490162   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:23:01.502617   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:23:01.502628   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:23:01.519456   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:01.519466   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:01.524299   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:23:01.524307   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:23:01.538216   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:23:01.538229   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:23:01.549624   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:23:01.549635   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:23:01.564569   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:23:01.564580   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:23:01.579315   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:01.579326   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:01.603098   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:23:01.603109   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:01.614459   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:01.614470   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:01.649600   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:23:01.649613   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:23:01.666245   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:01.666256   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:04.201634   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:09.203725   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:09.203895   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:09.223911   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:23:09.223988   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:09.238803   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:23:09.238890   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:09.250346   14525 logs.go:276] 4 containers: [bbc57dcc3a9f 2796479991d6 2dfb2c043f8e 2177fc854340]
	I0320 17:23:09.250413   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:09.260631   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:23:09.260709   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:09.271531   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:23:09.271598   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:09.284281   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:23:09.284343   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:09.296085   14525 logs.go:276] 0 containers: []
	W0320 17:23:09.296097   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:09.296152   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:09.311556   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:23:09.311577   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:09.311582   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:09.345797   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:23:09.345807   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:23:09.359297   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:23:09.359307   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:23:09.377487   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:23:09.377501   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:23:09.389010   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:09.389024   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:09.412945   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:23:09.412953   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:23:09.424458   14525 logs.go:123] Gathering logs for coredns [2dfb2c043f8e] ...
	I0320 17:23:09.424468   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2dfb2c043f8e"
	I0320 17:23:09.436020   14525 logs.go:123] Gathering logs for coredns [2177fc854340] ...
	I0320 17:23:09.436032   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2177fc854340"
	I0320 17:23:09.447912   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:23:09.447921   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:23:09.466710   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:23:09.466719   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:23:09.481211   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:23:09.481220   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:09.492973   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:23:09.492989   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:23:09.506404   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:09.506414   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:09.541381   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:09.541389   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:09.545465   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:23:09.545471   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:23:12.061766   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:17.063903   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:17.064141   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0320 17:23:17.081687   14525 logs.go:276] 1 containers: [0d16ab6d3809]
	I0320 17:23:17.081780   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0320 17:23:17.095158   14525 logs.go:276] 1 containers: [6ff50c37b28a]
	I0320 17:23:17.095229   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0320 17:23:17.107260   14525 logs.go:276] 4 containers: [1e63d2cf88f0 5e7f30c8db04 bbc57dcc3a9f 2796479991d6]
	I0320 17:23:17.107321   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0320 17:23:17.118445   14525 logs.go:276] 1 containers: [2c8d4e0b645e]
	I0320 17:23:17.118515   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0320 17:23:17.129359   14525 logs.go:276] 1 containers: [0fb2d91fdfc3]
	I0320 17:23:17.129425   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0320 17:23:17.140107   14525 logs.go:276] 1 containers: [ef10db954845]
	I0320 17:23:17.140165   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0320 17:23:17.151098   14525 logs.go:276] 0 containers: []
	W0320 17:23:17.151108   14525 logs.go:278] No container was found matching "kindnet"
	I0320 17:23:17.151154   14525 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I0320 17:23:17.165032   14525 logs.go:276] 1 containers: [64e49cd1d0b3]
	I0320 17:23:17.165050   14525 logs.go:123] Gathering logs for kubelet ...
	I0320 17:23:17.165055   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0320 17:23:17.200432   14525 logs.go:123] Gathering logs for dmesg ...
	I0320 17:23:17.200440   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0320 17:23:17.204819   14525 logs.go:123] Gathering logs for etcd [6ff50c37b28a] ...
	I0320 17:23:17.204827   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 6ff50c37b28a"
	I0320 17:23:17.218629   14525 logs.go:123] Gathering logs for kube-controller-manager [ef10db954845] ...
	I0320 17:23:17.218639   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ef10db954845"
	I0320 17:23:17.236698   14525 logs.go:123] Gathering logs for describe nodes ...
	I0320 17:23:17.236707   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0320 17:23:17.270499   14525 logs.go:123] Gathering logs for coredns [2796479991d6] ...
	I0320 17:23:17.270512   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2796479991d6"
	I0320 17:23:17.282489   14525 logs.go:123] Gathering logs for Docker ...
	I0320 17:23:17.282502   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0320 17:23:17.306673   14525 logs.go:123] Gathering logs for coredns [1e63d2cf88f0] ...
	I0320 17:23:17.306683   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 1e63d2cf88f0"
	I0320 17:23:17.318777   14525 logs.go:123] Gathering logs for kube-scheduler [2c8d4e0b645e] ...
	I0320 17:23:17.318788   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2c8d4e0b645e"
	I0320 17:23:17.333256   14525 logs.go:123] Gathering logs for container status ...
	I0320 17:23:17.333266   14525 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0320 17:23:17.346006   14525 logs.go:123] Gathering logs for kube-apiserver [0d16ab6d3809] ...
	I0320 17:23:17.346017   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0d16ab6d3809"
	I0320 17:23:17.359873   14525 logs.go:123] Gathering logs for coredns [5e7f30c8db04] ...
	I0320 17:23:17.359883   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5e7f30c8db04"
	I0320 17:23:17.370827   14525 logs.go:123] Gathering logs for coredns [bbc57dcc3a9f] ...
	I0320 17:23:17.370841   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 bbc57dcc3a9f"
	I0320 17:23:17.382600   14525 logs.go:123] Gathering logs for kube-proxy [0fb2d91fdfc3] ...
	I0320 17:23:17.382609   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 0fb2d91fdfc3"
	I0320 17:23:17.398636   14525 logs.go:123] Gathering logs for storage-provisioner [64e49cd1d0b3] ...
	I0320 17:23:17.398645   14525 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 64e49cd1d0b3"
	I0320 17:23:19.916922   14525 api_server.go:253] Checking apiserver healthz at https://10.0.2.15:8443/healthz ...
	I0320 17:23:24.918964   14525 api_server.go:269] stopped: https://10.0.2.15:8443/healthz: Get "https://10.0.2.15:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0320 17:23:24.924346   14525 out.go:177] 
	W0320 17:23:24.927464   14525 out.go:239] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: context deadline exceeded
	W0320 17:23:24.927477   14525 out.go:239] * 
	* 
	W0320 17:23:24.928648   14525 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:23:24.939294   14525 out.go:177] 

** /stderr **
version_upgrade_test.go:200: upgrade from v1.26.0 to HEAD failed: out/minikube-darwin-arm64 start -p stopped-upgrade-184000 --memory=2200 --alsologtostderr -v=1 --driver=qemu2 : exit status 80
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (616.23s)
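The stderr above is one round of minikube's readiness loop: api_server.go probes https://10.0.2.15:8443/healthz with a roughly five-second per-request timeout, gathers container logs between rounds, and gives up when the overall 6m0s node wait expires (the GUEST_START message). A minimal Go sketch of that polling shape, assuming the endpoint and timeouts shown in the log; the helper name and the pacing between probes are illustrative, and the real code trusts the cluster CA rather than skipping TLS verification:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz is a hypothetical helper mirroring api_server.go in the log:
// poll the apiserver's /healthz until it answers 200 OK or the overall
// deadline (6m0s in the GUEST_START message) expires.
func waitForHealthz(url string, deadline time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // each probe gives up after ~5s, as the log shows
		Transport: &http.Transport{
			// Sketch-only shortcut; minikube actually trusts the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthy
			}
		}
		time.Sleep(2 * time.Second) // pacing simplified; the real loop gathers logs here
	}
	return fmt.Errorf("apiserver healthz never reported healthy")
}

func main() {
	if err := waitForHealthz("https://10.0.2.15:8443/healthz", 6*time.Minute); err != nil {
		fmt.Println("X Exiting due to GUEST_START:", err)
	}
}

On this run the probe never returns 200, so the loop exhausts the deadline and the upgrade test exits with status 80.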

TestPause/serial/Start (9.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-arm64 start -p pause-341000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p pause-341000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 : exit status 80 (9.888834792s)

-- stdout --
	* [pause-341000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "pause-341000" primary control-plane node in "pause-341000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "pause-341000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p pause-341000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-arm64 start -p pause-341000 --memory=2048 --install-addons=false --wait=all --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p pause-341000 -n pause-341000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p pause-341000 -n pause-341000: exit status 7 (53.454125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-341000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/Start (9.94s)
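Every qemu2 start in the remainder of this report fails at the same step: the driver launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, and the client cannot reach the /var/run/socket_vmnet unix socket. A quick way to reproduce just that failing step is to dial the socket directly; this sketch assumes the default socket path from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket path the qemu2 driver hands to socket_vmnet_client.
	conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
	if err != nil {
		// On this agent the dial fails with "connection refused", matching the
		// OUTPUT/ERROR pairs in every failed start above.
		fmt.Println("socket_vmnet not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("socket_vmnet is up")
}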

TestNoKubernetes/serial/StartWithK8s (10.15s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-342000 --driver=qemu2 
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-342000 --driver=qemu2 : exit status 80 (10.076966083s)

-- stdout --
	* [NoKubernetes-342000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "NoKubernetes-342000" primary control-plane node in "NoKubernetes-342000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "NoKubernetes-342000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-342000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-342000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000: exit status 7 (67.641125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-342000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (10.15s)
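The post-mortem helper shown above runs minikube status with a Go template and treats exit status 7 (profile exists, host stopped) as acceptable, skipping log retrieval instead of failing. A sketch of that exit-code handling, under the assumption that the test binary lives at the path the log records:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// The same invocation helpers_test.go logs: host state for one profile.
	cmd := exec.Command("out/minikube-darwin-arm64",
		"status", "--format={{.Host}}", "-p", "NoKubernetes-342000", "-n", "NoKubernetes-342000")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit status 7 means the profile exists but the host is not running
		// ("may be ok"), so the harness skips log retrieval.
		fmt.Printf("state %q, skipping log retrieval\n", string(out))
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host:", string(out))
}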

TestNoKubernetes/serial/StartWithStopK8s (5.98s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --driver=qemu2 
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --driver=qemu2 : exit status 80 (5.933958625s)

-- stdout --
	* [NoKubernetes-342000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-342000
	* Restarting existing qemu2 VM for "NoKubernetes-342000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-342000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-342000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000: exit status 7 (46.936042ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-342000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (5.98s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.6s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (2.60s)
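This subtest and upgrade-v1.2.0-to-current below fail for a different reason than the socket_vmnet errors: the fixture pins --driver=hyperkit, which is an Intel-only macOS hypervisor, so on this darwin/arm64 agent minikube rejects the driver outright with DRV_UNSUPPORTED_OS (exit status 56). Note that this subtest's console output surfaces a few lines below, interleaved into the TestNoKubernetes/serial/Start block, since the parallel suites share the captured stream. A minimal sketch of that kind of platform gate (the names are hypothetical stand-ins for the real driver registry check):

package main

import (
	"fmt"
	"runtime"
)

// supportsHyperkit is a hypothetical stand-in for the DRV_UNSUPPORTED_OS
// guard: hyperkit exists only for Intel Macs.
func supportsHyperkit() bool {
	return runtime.GOOS == "darwin" && runtime.GOARCH == "amd64"
}

func main() {
	if !supportsHyperkit() {
		fmt.Printf("X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on %s/%s\n",
			runtime.GOOS, runtime.GOARCH)
		return
	}
	fmt.Println("hyperkit available")
}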

TestNoKubernetes/serial/Start (5.88s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --driver=qemu2 
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=17713
- KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current402020323/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --driver=qemu2 : exit status 80 (5.844767875s)

-- stdout --
	* [NoKubernetes-342000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-342000
	* Restarting existing qemu2 VM for "NoKubernetes-342000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-342000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-342000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000: exit status 7 (35.056125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-342000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/Start (5.88s)
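The "StartHost failed, but will try again" / second-attempt pattern visible in each stderr block is a single retry: the driver start fails, minikube cleans up (deleting or restarting the VM), waits five seconds, and tries once more before exiting with GUEST_PROVISION. A sketch of that retry-once shape, with hypothetical stand-ins for the real driver calls:

package main

import (
	"errors"
	"fmt"
	"time"
)

// startHost stands in for the real driver start; on this agent it always
// fails with the socket_vmnet connection error seen in the log.
func startHost() error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1`)
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		// "Will try again in 5 seconds ..." in the alsologtostderr trace below.
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			// The second failure is terminal: GUEST_PROVISION, exit status 80.
			fmt.Println("X Exiting due to GUEST_PROVISION:", err)
		}
	}
}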

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.51s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.32.0 on darwin (arm64)
- MINIKUBE_LOCATION=17713
- KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-arm64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3147841894/001
* Using the hyperkit driver based on user configuration

X Exiting due to DRV_UNSUPPORTED_OS: The driver 'hyperkit' is not supported on darwin/arm64

driver_install_or_update_test.go:209: failed to run minikube. got: exit status 56
--- FAIL: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (2.51s)

TestNoKubernetes/serial/StartNoArgs (5.96s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-342000 --driver=qemu2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-342000 --driver=qemu2 : exit status 80 (5.891603875s)

-- stdout --
	* [NoKubernetes-342000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-342000
	* Restarting existing qemu2 VM for "NoKubernetes-342000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "NoKubernetes-342000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p NoKubernetes-342000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-arm64 start -p NoKubernetes-342000 --driver=qemu2 " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p NoKubernetes-342000 -n NoKubernetes-342000: exit status 7 (65.692875ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-342000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (5.96s)

TestNetworkPlugins/group/auto/Start (9.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p auto-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p auto-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=qemu2 : exit status 80 (9.815129833s)

-- stdout --
	* [auto-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "auto-670000" primary control-plane node in "auto-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "auto-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:25:14.241854   15039 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:25:14.241978   15039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:14.241982   15039 out.go:304] Setting ErrFile to fd 2...
	I0320 17:25:14.241984   15039 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:14.242117   15039 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:25:14.243172   15039 out.go:298] Setting JSON to false
	I0320 17:25:14.259540   15039 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8684,"bootTime":1710972030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:25:14.259598   15039 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:25:14.265013   15039 out.go:177] * [auto-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:25:14.273040   15039 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:25:14.276941   15039 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:25:14.273089   15039 notify.go:220] Checking for updates...
	I0320 17:25:14.284015   15039 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:25:14.287966   15039 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:25:14.291027   15039 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:25:14.294059   15039 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:25:14.297442   15039 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:14.297513   15039 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:14.297559   15039 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:25:14.301980   15039 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:25:14.308895   15039 start.go:297] selected driver: qemu2
	I0320 17:25:14.308901   15039 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:25:14.308907   15039 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:25:14.311176   15039 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:25:14.314943   15039 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:25:14.318070   15039 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:25:14.318113   15039 cni.go:84] Creating CNI manager for ""
	I0320 17:25:14.318120   15039 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:25:14.318124   15039 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:25:14.318158   15039 start.go:340] cluster config:
	{Name:auto-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:25:14.322801   15039 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:25:14.331077   15039 out.go:177] * Starting "auto-670000" primary control-plane node in "auto-670000" cluster
	I0320 17:25:14.333967   15039 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:25:14.333981   15039 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:25:14.333988   15039 cache.go:56] Caching tarball of preloaded images
	I0320 17:25:14.334044   15039 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:25:14.334050   15039 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:25:14.334112   15039 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/auto-670000/config.json ...
	I0320 17:25:14.334123   15039 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/auto-670000/config.json: {Name:mkf3cc3a7e87fb4e7af064989cee90e4bb79ae6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:25:14.334342   15039 start.go:360] acquireMachinesLock for auto-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:14.334375   15039 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "auto-670000"
	I0320 17:25:14.334388   15039 start.go:93] Provisioning new machine with config: &{Name:auto-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:14.334415   15039 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:14.341904   15039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:14.359811   15039 start.go:159] libmachine.API.Create for "auto-670000" (driver="qemu2")
	I0320 17:25:14.359844   15039 client.go:168] LocalClient.Create starting
	I0320 17:25:14.359919   15039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:14.359948   15039 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:14.359960   15039 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:14.360007   15039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:14.360034   15039 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:14.360042   15039 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:14.360401   15039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:14.502844   15039 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:14.606378   15039 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:14.606386   15039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:14.606599   15039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2
	I0320 17:25:14.618584   15039 main.go:141] libmachine: STDOUT: 
	I0320 17:25:14.618603   15039 main.go:141] libmachine: STDERR: 
	I0320 17:25:14.618655   15039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2 +20000M
	I0320 17:25:14.629453   15039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:14.629468   15039 main.go:141] libmachine: STDERR: 
	I0320 17:25:14.629482   15039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2
	I0320 17:25:14.629486   15039 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:14.629524   15039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:93:c8:8d:20:43 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2
	I0320 17:25:14.631139   15039 main.go:141] libmachine: STDOUT: 
	I0320 17:25:14.631152   15039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:14.631170   15039 client.go:171] duration metric: took 271.329208ms to LocalClient.Create
	I0320 17:25:16.631627   15039 start.go:128] duration metric: took 2.297267083s to createHost
	I0320 17:25:16.631715   15039 start.go:83] releasing machines lock for "auto-670000", held for 2.297398583s
	W0320 17:25:16.631842   15039 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:16.642936   15039 out.go:177] * Deleting "auto-670000" in qemu2 ...
	W0320 17:25:16.674060   15039 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:16.674085   15039 start.go:728] Will try again in 5 seconds ...
	I0320 17:25:21.676043   15039 start.go:360] acquireMachinesLock for auto-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:21.676509   15039 start.go:364] duration metric: took 320.167µs to acquireMachinesLock for "auto-670000"
	I0320 17:25:21.676633   15039 start.go:93] Provisioning new machine with config: &{Name:auto-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:auto-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:21.676967   15039 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:21.687458   15039 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:21.737285   15039 start.go:159] libmachine.API.Create for "auto-670000" (driver="qemu2")
	I0320 17:25:21.737345   15039 client.go:168] LocalClient.Create starting
	I0320 17:25:21.737461   15039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:21.737526   15039 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:21.737542   15039 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:21.737611   15039 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:21.737654   15039 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:21.737669   15039 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:21.738319   15039 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:21.891107   15039 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:21.956784   15039 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:21.956791   15039 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:21.957006   15039 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2
	I0320 17:25:21.969494   15039 main.go:141] libmachine: STDOUT: 
	I0320 17:25:21.969516   15039 main.go:141] libmachine: STDERR: 
	I0320 17:25:21.969588   15039 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2 +20000M
	I0320 17:25:21.980503   15039 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:21.980516   15039 main.go:141] libmachine: STDERR: 
	I0320 17:25:21.980534   15039 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2
	I0320 17:25:21.980538   15039 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:21.980567   15039 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:c2:c4:a3:b1:63 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/auto-670000/disk.qcow2
	I0320 17:25:21.982277   15039 main.go:141] libmachine: STDOUT: 
	I0320 17:25:21.982292   15039 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:21.982302   15039 client.go:171] duration metric: took 244.961833ms to LocalClient.Create
	I0320 17:25:23.984513   15039 start.go:128] duration metric: took 2.307561709s to createHost
	I0320 17:25:23.984586   15039 start.go:83] releasing machines lock for "auto-670000", held for 2.308144375s
	W0320 17:25:23.984956   15039 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p auto-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p auto-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:23.999507   15039 out.go:177] 
	W0320 17:25:24.003720   15039 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:25:24.003751   15039 out.go:239] * 
	* 
	W0320 17:25:24.006299   15039 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:25:24.011575   15039 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (9.82s)
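Every Start failure in this group dies the same way: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which must first connect to the unix socket at /var/run/socket_vmnet before it can pass a vmnet file descriptor to QEMU. With no socket_vmnet daemon listening on the agent, that connect is refused, the client exits 1 before QEMU ever runs, and minikube surfaces GUEST_PROVISION. A minimal Go sketch of the same reachability probe (it assumes only the socket path shown in the log and is not part of the test harness):

	package main

	import (
		"fmt"
		"net"
		"os"
		"time"
	)

	func main() {
		// Attempt the same unix-socket connect that socket_vmnet_client must
		// complete before it will hand a vmnet file descriptor to QEMU.
		conn, err := net.DialTimeout("unix", "/var/run/socket_vmnet", 2*time.Second)
		if err != nil {
			fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}

If this probe fails on the agent the same way the log does, restarting the socket_vmnet service there should clear every failure in this group.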

TestNetworkPlugins/group/kindnet/Start (9.82s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kindnet-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kindnet-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=qemu2 : exit status 80 (9.813247875s)

-- stdout --
	* [kindnet-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kindnet-670000" primary control-plane node in "kindnet-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kindnet-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:25:26.324274   15150 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:25:26.324402   15150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:26.324406   15150 out.go:304] Setting ErrFile to fd 2...
	I0320 17:25:26.324408   15150 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:26.324522   15150 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:25:26.325553   15150 out.go:298] Setting JSON to false
	I0320 17:25:26.341821   15150 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8696,"bootTime":1710972030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:25:26.341879   15150 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:25:26.347634   15150 out.go:177] * [kindnet-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:25:26.355585   15150 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:25:26.355647   15150 notify.go:220] Checking for updates...
	I0320 17:25:26.362619   15150 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:25:26.365559   15150 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:25:26.368576   15150 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:25:26.371589   15150 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:25:26.374512   15150 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:25:26.377843   15150 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:26.377916   15150 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:26.377964   15150 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:25:26.382539   15150 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:25:26.389571   15150 start.go:297] selected driver: qemu2
	I0320 17:25:26.389578   15150 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:25:26.389584   15150 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:25:26.391823   15150 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:25:26.395548   15150 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:25:26.398668   15150 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:25:26.398707   15150 cni.go:84] Creating CNI manager for "kindnet"
	I0320 17:25:26.398711   15150 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0320 17:25:26.398744   15150 start.go:340] cluster config:
	{Name:kindnet-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:25:26.403169   15150 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:25:26.410564   15150 out.go:177] * Starting "kindnet-670000" primary control-plane node in "kindnet-670000" cluster
	I0320 17:25:26.414551   15150 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:25:26.414567   15150 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:25:26.414577   15150 cache.go:56] Caching tarball of preloaded images
	I0320 17:25:26.414646   15150 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:25:26.414658   15150 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:25:26.414714   15150 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kindnet-670000/config.json ...
	I0320 17:25:26.414725   15150 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kindnet-670000/config.json: {Name:mkc1937969fd70db85f7a4fdbd80907cfd1a4576 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:25:26.414938   15150 start.go:360] acquireMachinesLock for kindnet-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:26.414968   15150 start.go:364] duration metric: took 24.875µs to acquireMachinesLock for "kindnet-670000"
	I0320 17:25:26.414980   15150 start.go:93] Provisioning new machine with config: &{Name:kindnet-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:26.415014   15150 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:26.423531   15150 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:26.440676   15150 start.go:159] libmachine.API.Create for "kindnet-670000" (driver="qemu2")
	I0320 17:25:26.440700   15150 client.go:168] LocalClient.Create starting
	I0320 17:25:26.440755   15150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:26.440785   15150 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:26.440799   15150 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:26.440843   15150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:26.440865   15150 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:26.440873   15150 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:26.441285   15150 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:26.584408   15150 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:26.634362   15150 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:26.634368   15150 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:26.634572   15150 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2
	I0320 17:25:26.647014   15150 main.go:141] libmachine: STDOUT: 
	I0320 17:25:26.647034   15150 main.go:141] libmachine: STDERR: 
	I0320 17:25:26.647077   15150 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2 +20000M
	I0320 17:25:26.657583   15150 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:26.657601   15150 main.go:141] libmachine: STDERR: 
	I0320 17:25:26.657613   15150 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2
	I0320 17:25:26.657618   15150 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:26.657659   15150 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=0e:6b:6f:b0:a7:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2
	I0320 17:25:26.659303   15150 main.go:141] libmachine: STDOUT: 
	I0320 17:25:26.659318   15150 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:26.659346   15150 client.go:171] duration metric: took 218.649333ms to LocalClient.Create
	I0320 17:25:28.661619   15150 start.go:128] duration metric: took 2.24665125s to createHost
	I0320 17:25:28.661703   15150 start.go:83] releasing machines lock for "kindnet-670000", held for 2.24680875s
	W0320 17:25:28.661769   15150 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:28.676834   15150 out.go:177] * Deleting "kindnet-670000" in qemu2 ...
	W0320 17:25:28.703949   15150 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:28.703988   15150 start.go:728] Will try again in 5 seconds ...
	I0320 17:25:33.705964   15150 start.go:360] acquireMachinesLock for kindnet-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:33.706373   15150 start.go:364] duration metric: took 327.083µs to acquireMachinesLock for "kindnet-670000"
	I0320 17:25:33.706491   15150 start.go:93] Provisioning new machine with config: &{Name:kindnet-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kindnet-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:33.706769   15150 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:33.715271   15150 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:33.764250   15150 start.go:159] libmachine.API.Create for "kindnet-670000" (driver="qemu2")
	I0320 17:25:33.764297   15150 client.go:168] LocalClient.Create starting
	I0320 17:25:33.764406   15150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:33.764468   15150 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:33.764488   15150 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:33.764555   15150 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:33.764595   15150 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:33.764609   15150 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:33.765141   15150 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:33.917777   15150 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:34.035863   15150 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:34.035869   15150 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:34.036088   15150 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2
	I0320 17:25:34.048339   15150 main.go:141] libmachine: STDOUT: 
	I0320 17:25:34.048362   15150 main.go:141] libmachine: STDERR: 
	I0320 17:25:34.048434   15150 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2 +20000M
	I0320 17:25:34.059118   15150 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:34.059143   15150 main.go:141] libmachine: STDERR: 
	I0320 17:25:34.059153   15150 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2
	I0320 17:25:34.059157   15150 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:34.059190   15150 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:d5:2e:c7:2d:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kindnet-670000/disk.qcow2
	I0320 17:25:34.060904   15150 main.go:141] libmachine: STDOUT: 
	I0320 17:25:34.060921   15150 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:34.060933   15150 client.go:171] duration metric: took 296.642ms to LocalClient.Create
	I0320 17:25:36.063049   15150 start.go:128] duration metric: took 2.35633375s to createHost
	I0320 17:25:36.063113   15150 start.go:83] releasing machines lock for "kindnet-670000", held for 2.356806917s
	W0320 17:25:36.063563   15150 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kindnet-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p kindnet-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:36.076170   15150 out.go:177] 
	W0320 17:25:36.080394   15150 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:25:36.080454   15150 out.go:239] * 
	* 
	W0320 17:25:36.082997   15150 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:25:36.092295   15150 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (9.82s)
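The stderr above also shows minikube's create-retry behavior end to end: the first createHost attempt fails, the half-built "kindnet-670000" profile is deleted, start.go waits a fixed 5 seconds ("Will try again in 5 seconds ..."), and the second attempt fails identically because nothing is listening on the socket. A compressed, illustrative Go sketch of that control flow (createHost here is a stand-in that always fails the way the log does, not minikube's actual function):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// createHost stands in for the qemu2 driver's host creation; it fails
	// exactly as the log does, since nothing listens on the vmnet socket.
	func createHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := createHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds ..."
			if err := createHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
				return
			}
		}
		fmt.Println("host created")
	}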

TestNetworkPlugins/group/flannel/Start (9.93s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p flannel-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p flannel-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=qemu2 : exit status 80 (9.929988375s)

-- stdout --
	* [flannel-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "flannel-670000" primary control-plane node in "flannel-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "flannel-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:25:38.522382   15266 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:25:38.522528   15266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:38.522531   15266 out.go:304] Setting ErrFile to fd 2...
	I0320 17:25:38.522534   15266 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:38.522655   15266 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:25:38.523748   15266 out.go:298] Setting JSON to false
	I0320 17:25:38.540082   15266 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8708,"bootTime":1710972030,"procs":515,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:25:38.540156   15266 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:25:38.546684   15266 out.go:177] * [flannel-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:25:38.554703   15266 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:25:38.554725   15266 notify.go:220] Checking for updates...
	I0320 17:25:38.557562   15266 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:25:38.561649   15266 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:25:38.564676   15266 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:25:38.568576   15266 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:25:38.571678   15266 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:25:38.575066   15266 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:38.575151   15266 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:38.575209   15266 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:25:38.579641   15266 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:25:38.586713   15266 start.go:297] selected driver: qemu2
	I0320 17:25:38.586720   15266 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:25:38.586727   15266 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:25:38.588992   15266 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:25:38.592677   15266 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:25:38.595755   15266 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:25:38.595800   15266 cni.go:84] Creating CNI manager for "flannel"
	I0320 17:25:38.595812   15266 start_flags.go:319] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0320 17:25:38.595840   15266 start.go:340] cluster config:
	{Name:flannel-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:25:38.600529   15266 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:25:38.608685   15266 out.go:177] * Starting "flannel-670000" primary control-plane node in "flannel-670000" cluster
	I0320 17:25:38.612643   15266 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:25:38.612660   15266 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:25:38.612667   15266 cache.go:56] Caching tarball of preloaded images
	I0320 17:25:38.612725   15266 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:25:38.612731   15266 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:25:38.612789   15266 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/flannel-670000/config.json ...
	I0320 17:25:38.612801   15266 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/flannel-670000/config.json: {Name:mkc2d94240d88cc4f810de15df12a10401e03ff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:25:38.613033   15266 start.go:360] acquireMachinesLock for flannel-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:38.613069   15266 start.go:364] duration metric: took 29.458µs to acquireMachinesLock for "flannel-670000"
	I0320 17:25:38.613083   15266 start.go:93] Provisioning new machine with config: &{Name:flannel-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:38.613126   15266 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:38.621685   15266 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:38.640530   15266 start.go:159] libmachine.API.Create for "flannel-670000" (driver="qemu2")
	I0320 17:25:38.640562   15266 client.go:168] LocalClient.Create starting
	I0320 17:25:38.640642   15266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:38.640672   15266 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:38.640688   15266 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:38.640737   15266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:38.640761   15266 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:38.640768   15266 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:38.641209   15266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:38.783369   15266 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:38.842017   15266 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:38.842022   15266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:38.842234   15266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2
	I0320 17:25:38.854366   15266 main.go:141] libmachine: STDOUT: 
	I0320 17:25:38.854386   15266 main.go:141] libmachine: STDERR: 
	I0320 17:25:38.854439   15266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2 +20000M
	I0320 17:25:38.864919   15266 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:38.864937   15266 main.go:141] libmachine: STDERR: 
	I0320 17:25:38.864953   15266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2
	I0320 17:25:38.864958   15266 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:38.864993   15266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=da:2b:84:13:de:32 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2
	I0320 17:25:38.866623   15266 main.go:141] libmachine: STDOUT: 
	I0320 17:25:38.866639   15266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:38.866655   15266 client.go:171] duration metric: took 226.098375ms to LocalClient.Create
	I0320 17:25:40.867715   15266 start.go:128] duration metric: took 2.25464525s to createHost
	I0320 17:25:40.867777   15266 start.go:83] releasing machines lock for "flannel-670000", held for 2.254785584s
	W0320 17:25:40.867821   15266 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:40.885054   15266 out.go:177] * Deleting "flannel-670000" in qemu2 ...
	W0320 17:25:40.909194   15266 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:40.909227   15266 start.go:728] Will try again in 5 seconds ...
	I0320 17:25:45.911229   15266 start.go:360] acquireMachinesLock for flannel-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:45.911725   15266 start.go:364] duration metric: took 395.458µs to acquireMachinesLock for "flannel-670000"
	I0320 17:25:45.911865   15266 start.go:93] Provisioning new machine with config: &{Name:flannel-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:flannel-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:45.912174   15266 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:45.922861   15266 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:45.973084   15266 start.go:159] libmachine.API.Create for "flannel-670000" (driver="qemu2")
	I0320 17:25:45.973148   15266 client.go:168] LocalClient.Create starting
	I0320 17:25:45.973267   15266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:45.973336   15266 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:45.973352   15266 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:45.973426   15266 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:45.973473   15266 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:45.973484   15266 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:45.974041   15266 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:46.223184   15266 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:46.347145   15266 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:46.347151   15266 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:46.347346   15266 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2
	I0320 17:25:46.359619   15266 main.go:141] libmachine: STDOUT: 
	I0320 17:25:46.359639   15266 main.go:141] libmachine: STDERR: 
	I0320 17:25:46.359699   15266 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2 +20000M
	I0320 17:25:46.370322   15266 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:46.370336   15266 main.go:141] libmachine: STDERR: 
	I0320 17:25:46.370351   15266 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2
	I0320 17:25:46.370356   15266 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:46.370395   15266 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=42:8d:24:f1:bc:70 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/flannel-670000/disk.qcow2
	I0320 17:25:46.372128   15266 main.go:141] libmachine: STDOUT: 
	I0320 17:25:46.372143   15266 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:46.372160   15266 client.go:171] duration metric: took 399.017834ms to LocalClient.Create
	I0320 17:25:48.374264   15266 start.go:128] duration metric: took 2.462139875s to createHost
	I0320 17:25:48.374420   15266 start.go:83] releasing machines lock for "flannel-670000", held for 2.462676042s
	W0320 17:25:48.374818   15266 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p flannel-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p flannel-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:48.391378   15266 out.go:177] 
	W0320 17:25:48.394620   15266 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:25:48.394648   15266 out.go:239] * 
	* 
	W0320 17:25:48.397258   15266 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:25:48.406489   15266 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (9.93s)
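The kindnet, flannel, and enable-default-cni invocations in this section are identical apart from the profile name and CNI selection, so they can be replayed outside the harness with a small table-driven loop. The flags below are copied verbatim from the net_test.go:112 command lines; the replay loop itself is only a sketch, not part of net_test.go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Profile names and CNI flags taken from the failing commands in this report.
		cases := []struct{ profile, cni string }{
			{"kindnet-670000", "--cni=kindnet"},
			{"flannel-670000", "--cni=flannel"},
			{"enable-default-cni-670000", "--enable-default-cni=true"},
		}
		for _, c := range cases {
			args := []string{"start", "-p", c.profile, "--memory=3072", "--alsologtostderr",
				"--wait=true", "--wait-timeout=15m", c.cni, "--driver=qemu2"}
			out, err := exec.Command("out/minikube-darwin-arm64", args...).CombinedOutput()
			fmt.Printf("%s: err=%v\n%s\n", c.profile, err, out)
		}
	}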

TestNetworkPlugins/group/enable-default-cni/Start (9.87s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p enable-default-cni-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p enable-default-cni-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=qemu2 : exit status 80 (9.870023583s)

-- stdout --
	* [enable-default-cni-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "enable-default-cni-670000" primary control-plane node in "enable-default-cni-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "enable-default-cni-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:25:50.896803   15398 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:25:50.896948   15398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:50.896955   15398 out.go:304] Setting ErrFile to fd 2...
	I0320 17:25:50.896957   15398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:25:50.897089   15398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:25:50.898402   15398 out.go:298] Setting JSON to false
	I0320 17:25:50.914919   15398 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8720,"bootTime":1710972030,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:25:50.914986   15398 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:25:50.921830   15398 out.go:177] * [enable-default-cni-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:25:50.931774   15398 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:25:50.931776   15398 notify.go:220] Checking for updates...
	I0320 17:25:50.935819   15398 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:25:50.938755   15398 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:25:50.942801   15398 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:25:50.945890   15398 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:25:50.948822   15398 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:25:50.952215   15398 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:50.952291   15398 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:25:50.952343   15398 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:25:50.955824   15398 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:25:50.962782   15398 start.go:297] selected driver: qemu2
	I0320 17:25:50.962789   15398 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:25:50.962797   15398 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:25:50.965078   15398 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:25:50.967837   15398 out.go:177] * Automatically selected the socket_vmnet network
	E0320 17:25:50.970880   15398 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0320 17:25:50.970894   15398 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:25:50.970940   15398 cni.go:84] Creating CNI manager for "bridge"
	I0320 17:25:50.970944   15398 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:25:50.970973   15398 start.go:340] cluster config:
	{Name:enable-default-cni-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:25:50.975649   15398 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:25:50.983850   15398 out.go:177] * Starting "enable-default-cni-670000" primary control-plane node in "enable-default-cni-670000" cluster
	I0320 17:25:50.987781   15398 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:25:50.987796   15398 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:25:50.987806   15398 cache.go:56] Caching tarball of preloaded images
	I0320 17:25:50.987864   15398 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:25:50.987870   15398 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:25:50.987943   15398 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/enable-default-cni-670000/config.json ...
	I0320 17:25:50.987954   15398 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/enable-default-cni-670000/config.json: {Name:mk08fc24a28a631b709c0784468fc14f01c4fc04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:25:50.988192   15398 start.go:360] acquireMachinesLock for enable-default-cni-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:50.988226   15398 start.go:364] duration metric: took 26.708µs to acquireMachinesLock for "enable-default-cni-670000"
	I0320 17:25:50.988239   15398 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:50.988280   15398 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:50.994781   15398 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:51.013163   15398 start.go:159] libmachine.API.Create for "enable-default-cni-670000" (driver="qemu2")
	I0320 17:25:51.013190   15398 client.go:168] LocalClient.Create starting
	I0320 17:25:51.013253   15398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:51.013283   15398 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:51.013298   15398 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:51.013340   15398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:51.013363   15398 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:51.013372   15398 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:51.013759   15398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:51.157870   15398 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:51.271156   15398 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:51.271163   15398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:51.271379   15398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2
	I0320 17:25:51.283493   15398 main.go:141] libmachine: STDOUT: 
	I0320 17:25:51.283520   15398 main.go:141] libmachine: STDERR: 
	I0320 17:25:51.283579   15398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2 +20000M
	I0320 17:25:51.294385   15398 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:51.294413   15398 main.go:141] libmachine: STDERR: 
	I0320 17:25:51.294432   15398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2
	I0320 17:25:51.294438   15398 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:51.294479   15398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=fe:b6:ec:a1:92:99 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2
	I0320 17:25:51.296247   15398 main.go:141] libmachine: STDOUT: 
	I0320 17:25:51.296262   15398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:51.296281   15398 client.go:171] duration metric: took 283.097ms to LocalClient.Create
	I0320 17:25:53.298439   15398 start.go:128] duration metric: took 2.310225042s to createHost
	I0320 17:25:53.298533   15398 start.go:83] releasing machines lock for "enable-default-cni-670000", held for 2.310386583s
	W0320 17:25:53.298589   15398 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:53.311679   15398 out.go:177] * Deleting "enable-default-cni-670000" in qemu2 ...
	W0320 17:25:53.339982   15398 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:25:53.340013   15398 start.go:728] Will try again in 5 seconds ...
	I0320 17:25:58.342066   15398 start.go:360] acquireMachinesLock for enable-default-cni-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:25:58.342564   15398 start.go:364] duration metric: took 337.292µs to acquireMachinesLock for "enable-default-cni-670000"
	I0320 17:25:58.342703   15398 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:enable-default-cni-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:25:58.342967   15398 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:25:58.352628   15398 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:25:58.402615   15398 start.go:159] libmachine.API.Create for "enable-default-cni-670000" (driver="qemu2")
	I0320 17:25:58.402661   15398 client.go:168] LocalClient.Create starting
	I0320 17:25:58.402767   15398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:25:58.402828   15398 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:58.402844   15398 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:58.402901   15398 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:25:58.402946   15398 main.go:141] libmachine: Decoding PEM data...
	I0320 17:25:58.402956   15398 main.go:141] libmachine: Parsing certificate...
	I0320 17:25:58.403483   15398 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:25:58.556840   15398 main.go:141] libmachine: Creating SSH key...
	I0320 17:25:58.661473   15398 main.go:141] libmachine: Creating Disk image...
	I0320 17:25:58.661479   15398 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:25:58.661694   15398 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2
	I0320 17:25:58.674359   15398 main.go:141] libmachine: STDOUT: 
	I0320 17:25:58.674378   15398 main.go:141] libmachine: STDERR: 
	I0320 17:25:58.674431   15398 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2 +20000M
	I0320 17:25:58.684945   15398 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:25:58.684960   15398 main.go:141] libmachine: STDERR: 
	I0320 17:25:58.684976   15398 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2
	I0320 17:25:58.684982   15398 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:25:58.685012   15398 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=86:80:1d:05:15:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/enable-default-cni-670000/disk.qcow2
	I0320 17:25:58.686706   15398 main.go:141] libmachine: STDOUT: 
	I0320 17:25:58.686722   15398 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:25:58.686733   15398 client.go:171] duration metric: took 284.076959ms to LocalClient.Create
	I0320 17:26:00.688879   15398 start.go:128] duration metric: took 2.345956792s to createHost
	I0320 17:26:00.689196   15398 start.go:83] releasing machines lock for "enable-default-cni-670000", held for 2.346473s
	W0320 17:26:00.689517   15398 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p enable-default-cni-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:00.704199   15398 out.go:177] 
	W0320 17:26:00.708349   15398 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:26:00.708376   15398 out.go:239] * 
	* 
	W0320 17:26:00.711198   15398 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:26:00.721299   15398 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (9.87s)
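
Every attempt in this group fails at the same point: libmachine launches qemu-system-aarch64 through /opt/socket_vmnet/bin/socket_vmnet_client, which cannot reach the socket_vmnet daemon at /var/run/socket_vmnet ("Connection refused"), so no VM is ever created and minikube exits with status 80 before Kubernetes is even attempted. The failing condition can be reproduced in isolation with the minimal Go sketch below; the socket path is taken from the logs above, while the file name and messages are illustrative only and not part of minikube or its test suite.

// socketprobe.go — minimal sketch (assumed, not minikube code): dial the
// socket_vmnet unix socket that socket_vmnet_client needs before it can
// attach QEMU to the network.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the failing log lines above.
	const sock = "/var/run/socket_vmnet"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// With no daemon listening, this reports the same
		// "connection refused" condition the tests hit.
		fmt.Fprintf(os.Stderr, "socket_vmnet not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

If this probe fails on the build agent, every qemu2-driver test that selects the socket_vmnet network will fail at VM creation, which matches the uniform exit status 80 across this group.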

TestNetworkPlugins/group/bridge/Start (9.81s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p bridge-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p bridge-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=qemu2 : exit status 80 (9.808526416s)

-- stdout --
	* [bridge-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "bridge-670000" primary control-plane node in "bridge-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "bridge-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:26:03.016459   15511 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:26:03.016577   15511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:03.016581   15511 out.go:304] Setting ErrFile to fd 2...
	I0320 17:26:03.016583   15511 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:03.016700   15511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:26:03.017769   15511 out.go:298] Setting JSON to false
	I0320 17:26:03.034031   15511 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8733,"bootTime":1710972030,"procs":525,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:26:03.034086   15511 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:26:03.040159   15511 out.go:177] * [bridge-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:26:03.048116   15511 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:26:03.048166   15511 notify.go:220] Checking for updates...
	I0320 17:26:03.053137   15511 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:26:03.056931   15511 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:26:03.060061   15511 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:26:03.064100   15511 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:26:03.065453   15511 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:26:03.068342   15511 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:03.068405   15511 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:03.068453   15511 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:26:03.073093   15511 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:26:03.078010   15511 start.go:297] selected driver: qemu2
	I0320 17:26:03.078015   15511 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:26:03.078020   15511 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:26:03.080236   15511 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:26:03.083120   15511 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:26:03.086097   15511 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:26:03.086132   15511 cni.go:84] Creating CNI manager for "bridge"
	I0320 17:26:03.086136   15511 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:26:03.086160   15511 start.go:340] cluster config:
	{Name:bridge-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:26:03.090616   15511 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:26:03.098903   15511 out.go:177] * Starting "bridge-670000" primary control-plane node in "bridge-670000" cluster
	I0320 17:26:03.103064   15511 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:26:03.103080   15511 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:26:03.103092   15511 cache.go:56] Caching tarball of preloaded images
	I0320 17:26:03.103144   15511 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:26:03.103150   15511 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:26:03.103224   15511 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/bridge-670000/config.json ...
	I0320 17:26:03.103235   15511 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/bridge-670000/config.json: {Name:mk947f2b1712083dffbdb19abec36c9697bf07a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:26:03.103447   15511 start.go:360] acquireMachinesLock for bridge-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:03.103477   15511 start.go:364] duration metric: took 24.459µs to acquireMachinesLock for "bridge-670000"
	I0320 17:26:03.103490   15511 start.go:93] Provisioning new machine with config: &{Name:bridge-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:03.103530   15511 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:03.111076   15511 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:03.128093   15511 start.go:159] libmachine.API.Create for "bridge-670000" (driver="qemu2")
	I0320 17:26:03.128117   15511 client.go:168] LocalClient.Create starting
	I0320 17:26:03.128193   15511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:03.128220   15511 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:03.128230   15511 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:03.128272   15511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:03.128301   15511 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:03.128310   15511 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:03.128667   15511 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:03.279951   15511 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:03.339000   15511 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:03.339008   15511 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:03.339401   15511 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2
	I0320 17:26:03.351543   15511 main.go:141] libmachine: STDOUT: 
	I0320 17:26:03.351567   15511 main.go:141] libmachine: STDERR: 
	I0320 17:26:03.351624   15511 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2 +20000M
	I0320 17:26:03.362353   15511 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:03.362371   15511 main.go:141] libmachine: STDERR: 
	I0320 17:26:03.362390   15511 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2
	I0320 17:26:03.362396   15511 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:03.362440   15511 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:41:ef:fa:ca:b4 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2
	I0320 17:26:03.364080   15511 main.go:141] libmachine: STDOUT: 
	I0320 17:26:03.364094   15511 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:03.364114   15511 client.go:171] duration metric: took 236.00125ms to LocalClient.Create
	I0320 17:26:05.364785   15511 start.go:128] duration metric: took 2.261304708s to createHost
	I0320 17:26:05.364877   15511 start.go:83] releasing machines lock for "bridge-670000", held for 2.261477s
	W0320 17:26:05.364932   15511 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:05.380256   15511 out.go:177] * Deleting "bridge-670000" in qemu2 ...
	W0320 17:26:05.407850   15511 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:05.407885   15511 start.go:728] Will try again in 5 seconds ...
	I0320 17:26:10.409249   15511 start.go:360] acquireMachinesLock for bridge-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:10.409666   15511 start.go:364] duration metric: took 310.917µs to acquireMachinesLock for "bridge-670000"
	I0320 17:26:10.409789   15511 start.go:93] Provisioning new machine with config: &{Name:bridge-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:bridge-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:10.410022   15511 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:10.420737   15511 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:10.469425   15511 start.go:159] libmachine.API.Create for "bridge-670000" (driver="qemu2")
	I0320 17:26:10.469467   15511 client.go:168] LocalClient.Create starting
	I0320 17:26:10.469563   15511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:10.469617   15511 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:10.469634   15511 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:10.469690   15511 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:10.469732   15511 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:10.469743   15511 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:10.470266   15511 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:10.623299   15511 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:10.720092   15511 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:10.720097   15511 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:10.720294   15511 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2
	I0320 17:26:10.732751   15511 main.go:141] libmachine: STDOUT: 
	I0320 17:26:10.732790   15511 main.go:141] libmachine: STDERR: 
	I0320 17:26:10.732840   15511 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2 +20000M
	I0320 17:26:10.743411   15511 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:10.743430   15511 main.go:141] libmachine: STDERR: 
	I0320 17:26:10.743441   15511 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2
	I0320 17:26:10.743446   15511 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:10.743481   15511 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9a:1f:75:5b:b3:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/bridge-670000/disk.qcow2
	I0320 17:26:10.745176   15511 main.go:141] libmachine: STDOUT: 
	I0320 17:26:10.745196   15511 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:10.745210   15511 client.go:171] duration metric: took 275.746584ms to LocalClient.Create
	I0320 17:26:12.747313   15511 start.go:128] duration metric: took 2.337352459s to createHost
	I0320 17:26:12.747374   15511 start.go:83] releasing machines lock for "bridge-670000", held for 2.337777333s
	W0320 17:26:12.747804   15511 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p bridge-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p bridge-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:12.763364   15511 out.go:177] 
	W0320 17:26:12.766385   15511 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:26:12.766475   15511 out.go:239] * 
	* 
	W0320 17:26:12.769327   15511 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:26:12.780367   15511 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (9.81s)
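
The control flow recorded in each of these failed starts is identical: create the host, fail to dial the socket, delete the profile, wait 5 seconds, retry once, then exit with GUEST_PROVISION and status 80. The sketch below mirrors that shape; it is illustrative only (createHost and the printed messages are stand-ins copied from the log, not minikube's actual API).

// retryshape.go — assumed, simplified model of the create/retry/fail
// sequence visible in the logs above.
package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// createHost stands in for libmachine's create path; on this agent it
// always fails the way the logs show.
func createHost(profile string) error {
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const profile = "bridge-670000" // profile name from the log above
	if err := createHost(profile); err != nil {
		fmt.Printf("! StartHost failed, but will try again: %v\n", err)
		time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
		if err := createHost(profile); err != nil {
			fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
			os.Exit(80) // the exit status net_test.go asserts on
		}
	}
}

Because the retry hits the same dead socket, the second attempt adds roughly 5 seconds plus another ~2-second create, which is why each of these tests consistently takes about 10 seconds to fail.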

TestNetworkPlugins/group/kubenet/Start (10.08s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p kubenet-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p kubenet-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=qemu2 : exit status 80 (10.077487416s)

-- stdout --
	* [kubenet-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "kubenet-670000" primary control-plane node in "kubenet-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "kubenet-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:26:15.083660   15621 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:26:15.083785   15621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:15.083789   15621 out.go:304] Setting ErrFile to fd 2...
	I0320 17:26:15.083791   15621 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:15.083919   15621 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:26:15.084981   15621 out.go:298] Setting JSON to false
	I0320 17:26:15.101259   15621 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8745,"bootTime":1710972030,"procs":522,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:26:15.101323   15621 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:26:15.107512   15621 out.go:177] * [kubenet-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:26:15.115471   15621 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:26:15.115525   15621 notify.go:220] Checking for updates...
	I0320 17:26:15.122291   15621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:26:15.126533   15621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:26:15.129559   15621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:26:15.131083   15621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:26:15.134508   15621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:26:15.137884   15621 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:15.137954   15621 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:15.138004   15621 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:26:15.142348   15621 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:26:15.149489   15621 start.go:297] selected driver: qemu2
	I0320 17:26:15.149495   15621 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:26:15.149503   15621 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:26:15.151783   15621 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:26:15.154593   15621 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:26:15.157586   15621 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:26:15.157624   15621 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0320 17:26:15.157649   15621 start.go:340] cluster config:
	{Name:kubenet-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:26:15.162103   15621 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:26:15.170456   15621 out.go:177] * Starting "kubenet-670000" primary control-plane node in "kubenet-670000" cluster
	I0320 17:26:15.174444   15621 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:26:15.174458   15621 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:26:15.174471   15621 cache.go:56] Caching tarball of preloaded images
	I0320 17:26:15.174533   15621 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:26:15.174539   15621 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:26:15.174603   15621 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kubenet-670000/config.json ...
	I0320 17:26:15.174614   15621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/kubenet-670000/config.json: {Name:mk89d55987cbcc7195aefcbf65ed5141088e9516 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:26:15.174835   15621 start.go:360] acquireMachinesLock for kubenet-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:15.174871   15621 start.go:364] duration metric: took 30.792µs to acquireMachinesLock for "kubenet-670000"
	I0320 17:26:15.174885   15621 start.go:93] Provisioning new machine with config: &{Name:kubenet-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:kubenet-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:15.174918   15621 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:15.183479   15621 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:15.201251   15621 start.go:159] libmachine.API.Create for "kubenet-670000" (driver="qemu2")
	I0320 17:26:15.201282   15621 client.go:168] LocalClient.Create starting
	I0320 17:26:15.201354   15621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:15.201385   15621 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:15.201396   15621 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:15.201444   15621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:15.201471   15621 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:15.201478   15621 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:15.201869   15621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:15.345466   15621 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:15.655021   15621 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:15.655033   15621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:15.655266   15621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2
	I0320 17:26:15.668300   15621 main.go:141] libmachine: STDOUT: 
	I0320 17:26:15.668322   15621 main.go:141] libmachine: STDERR: 
	I0320 17:26:15.668378   15621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2 +20000M
	I0320 17:26:15.679358   15621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:15.679373   15621 main.go:141] libmachine: STDERR: 
	I0320 17:26:15.679395   15621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2
	I0320 17:26:15.679405   15621 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:15.679446   15621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:91:85:a3:d0:e8 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2
	I0320 17:26:15.681152   15621 main.go:141] libmachine: STDOUT: 
	I0320 17:26:15.681166   15621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:15.681188   15621 client.go:171] duration metric: took 479.918584ms to LocalClient.Create
	I0320 17:26:17.683364   15621 start.go:128] duration metric: took 2.508506208s to createHost
	I0320 17:26:17.683452   15621 start.go:83] releasing machines lock for "kubenet-670000", held for 2.508662125s
	W0320 17:26:17.683525   15621 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:17.695711   15621 out.go:177] * Deleting "kubenet-670000" in qemu2 ...
	W0320 17:26:17.724821   15621 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:17.724848   15621 start.go:728] Will try again in 5 seconds ...
	I0320 17:26:22.726859   15621 start.go:360] acquireMachinesLock for kubenet-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:22.727340   15621 start.go:364] duration metric: took 384.792µs to acquireMachinesLock for "kubenet-670000"
	I0320 17:26:22.727464   15621 start.go:93] Provisioning new machine with config: &{Name:kubenet-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.29.3 ClusterName:kubenet-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:22.727780   15621 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:22.737267   15621 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:22.786082   15621 start.go:159] libmachine.API.Create for "kubenet-670000" (driver="qemu2")
	I0320 17:26:22.786123   15621 client.go:168] LocalClient.Create starting
	I0320 17:26:22.786228   15621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:22.786279   15621 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:22.786299   15621 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:22.786360   15621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:22.786401   15621 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:22.786414   15621 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:22.786938   15621 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:22.941793   15621 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:23.062333   15621 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:23.062339   15621 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:23.062545   15621 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2
	I0320 17:26:23.075005   15621 main.go:141] libmachine: STDOUT: 
	I0320 17:26:23.075023   15621 main.go:141] libmachine: STDERR: 
	I0320 17:26:23.075092   15621 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2 +20000M
	I0320 17:26:23.085876   15621 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:23.085892   15621 main.go:141] libmachine: STDERR: 
	I0320 17:26:23.085904   15621 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2
	I0320 17:26:23.085910   15621 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:23.085947   15621 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=96:ee:d3:89:32:85 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/kubenet-670000/disk.qcow2
	I0320 17:26:23.087669   15621 main.go:141] libmachine: STDOUT: 
	I0320 17:26:23.087682   15621 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:23.087694   15621 client.go:171] duration metric: took 301.577584ms to LocalClient.Create
	I0320 17:26:25.087903   15621 start.go:128] duration metric: took 2.360159292s to createHost
	I0320 17:26:25.087963   15621 start.go:83] releasing machines lock for "kubenet-670000", held for 2.360692s
	W0320 17:26:25.088342   15621 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p kubenet-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:25.099532   15621 out.go:177] 
	W0320 17:26:25.102641   15621 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:26:25.102671   15621 out.go:239] * 
	W0320 17:26:25.105285   15621 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:26:25.116628   15621 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (10.08s)
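Every attempt in this test dies at the same step: socket_vmnet_client cannot reach the socket_vmnet daemon's unix socket at /var/run/socket_vmnet, so QEMU is never handed the network file descriptor (the -netdev socket,fd=3 in the command line above) and minikube exits with status 80. The failure can be confirmed independently of minikube by dialing the socket directly. Below is a minimal standalone Go sketch (not part of the test suite; the file name and the 2-second timeout are arbitrary choices) that reproduces the same "Connection refused" check:

// probe_socket_vmnet.go — hypothetical standalone diagnostic, not minikube code.
// Dials the same unix socket that socket_vmnet_client uses; a dead daemon
// produces "connect: connection refused", matching the log lines above.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	const sock = "/var/run/socket_vmnet" // SocketVMnetPath from the cluster config above
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// "connection refused" => nothing is listening (daemon not running);
		// "no such file or directory" => the socket file was never created.
		fmt.Fprintf(os.Stderr, "socket_vmnet unreachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("socket_vmnet is accepting connections")
}

That every test in this group fails with the identical error within about 10 seconds suggests the daemon simply was not running on the agent, rather than anything specific to the kubenet configuration.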

TestNetworkPlugins/group/custom-flannel/Start (9.97s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p custom-flannel-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p custom-flannel-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=qemu2 : exit status 80 (9.966887667s)

-- stdout --
	* [custom-flannel-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "custom-flannel-670000" primary control-plane node in "custom-flannel-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "custom-flannel-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:26:27.426979   15733 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:26:27.427106   15733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:27.427109   15733 out.go:304] Setting ErrFile to fd 2...
	I0320 17:26:27.427111   15733 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:27.427252   15733 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:26:27.428322   15733 out.go:298] Setting JSON to false
	I0320 17:26:27.444622   15733 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8757,"bootTime":1710972030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:26:27.444679   15733 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:26:27.451392   15733 out.go:177] * [custom-flannel-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:26:27.455261   15733 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:26:27.458323   15733 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:26:27.455299   15733 notify.go:220] Checking for updates...
	I0320 17:26:27.465212   15733 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:26:27.469305   15733 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:26:27.472197   15733 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:26:27.475287   15733 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:26:27.478587   15733 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:27.478653   15733 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:27.478698   15733 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:26:27.482145   15733 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:26:27.489254   15733 start.go:297] selected driver: qemu2
	I0320 17:26:27.489258   15733 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:26:27.489263   15733 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:26:27.491550   15733 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:26:27.494221   15733 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:26:27.497326   15733 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:26:27.497370   15733 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0320 17:26:27.497385   15733 start_flags.go:319] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0320 17:26:27.497422   15733 start.go:340] cluster config:
	{Name:custom-flannel-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClie
ntPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:26:27.501917   15733 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:26:27.510212   15733 out.go:177] * Starting "custom-flannel-670000" primary control-plane node in "custom-flannel-670000" cluster
	I0320 17:26:27.513295   15733 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:26:27.513314   15733 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:26:27.513324   15733 cache.go:56] Caching tarball of preloaded images
	I0320 17:26:27.513390   15733 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:26:27.513397   15733 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:26:27.513462   15733 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/custom-flannel-670000/config.json ...
	I0320 17:26:27.513474   15733 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/custom-flannel-670000/config.json: {Name:mk10c3db17f0b81a764e2eb7846aa61e99d3fd64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:26:27.513717   15733 start.go:360] acquireMachinesLock for custom-flannel-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:27.513751   15733 start.go:364] duration metric: took 26.917µs to acquireMachinesLock for "custom-flannel-670000"
	I0320 17:26:27.513764   15733 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:27.513798   15733 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:27.522223   15733 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:27.540188   15733 start.go:159] libmachine.API.Create for "custom-flannel-670000" (driver="qemu2")
	I0320 17:26:27.540224   15733 client.go:168] LocalClient.Create starting
	I0320 17:26:27.540281   15733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:27.540309   15733 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:27.540319   15733 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:27.540369   15733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:27.540393   15733 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:27.540399   15733 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:27.540845   15733 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:27.685728   15733 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:27.823965   15733 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:27.823972   15733 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:27.824158   15733 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2
	I0320 17:26:27.836291   15733 main.go:141] libmachine: STDOUT: 
	I0320 17:26:27.836313   15733 main.go:141] libmachine: STDERR: 
	I0320 17:26:27.836371   15733 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2 +20000M
	I0320 17:26:27.847004   15733 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:27.847022   15733 main.go:141] libmachine: STDERR: 
	I0320 17:26:27.847037   15733 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2
	I0320 17:26:27.847043   15733 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:27.847081   15733 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=d2:f5:e0:29:ec:3f -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2
	I0320 17:26:27.848780   15733 main.go:141] libmachine: STDOUT: 
	I0320 17:26:27.848795   15733 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:27.848813   15733 client.go:171] duration metric: took 308.594709ms to LocalClient.Create
	I0320 17:26:29.850916   15733 start.go:128] duration metric: took 2.337190166s to createHost
	I0320 17:26:29.850978   15733 start.go:83] releasing machines lock for "custom-flannel-670000", held for 2.337305125s
	W0320 17:26:29.851058   15733 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:29.868241   15733 out.go:177] * Deleting "custom-flannel-670000" in qemu2 ...
	W0320 17:26:29.894031   15733 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:29.894058   15733 start.go:728] Will try again in 5 seconds ...
	I0320 17:26:34.896124   15733 start.go:360] acquireMachinesLock for custom-flannel-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:34.896645   15733 start.go:364] duration metric: took 338.417µs to acquireMachinesLock for "custom-flannel-670000"
	I0320 17:26:34.896793   15733 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.29.3 ClusterName:custom-flannel-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:34.897105   15733 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:34.908799   15733 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:34.957274   15733 start.go:159] libmachine.API.Create for "custom-flannel-670000" (driver="qemu2")
	I0320 17:26:34.957326   15733 client.go:168] LocalClient.Create starting
	I0320 17:26:34.957429   15733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:34.957493   15733 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:34.957512   15733 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:34.957588   15733 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:34.957633   15733 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:34.957643   15733 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:34.958160   15733 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:35.147985   15733 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:35.289688   15733 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:35.289694   15733 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:35.289907   15733 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2
	I0320 17:26:35.302583   15733 main.go:141] libmachine: STDOUT: 
	I0320 17:26:35.302680   15733 main.go:141] libmachine: STDERR: 
	I0320 17:26:35.302728   15733 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2 +20000M
	I0320 17:26:35.313336   15733 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:35.313413   15733 main.go:141] libmachine: STDERR: 
	I0320 17:26:35.313428   15733 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2
	I0320 17:26:35.313443   15733 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:35.313477   15733 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:15:00:7b:87:db -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/custom-flannel-670000/disk.qcow2
	I0320 17:26:35.315215   15733 main.go:141] libmachine: STDOUT: 
	I0320 17:26:35.315343   15733 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:35.315358   15733 client.go:171] duration metric: took 358.039166ms to LocalClient.Create
	I0320 17:26:37.317458   15733 start.go:128] duration metric: took 2.420418417s to createHost
	I0320 17:26:37.317503   15733 start.go:83] releasing machines lock for "custom-flannel-670000", held for 2.420925042s
	W0320 17:26:37.317836   15733 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p custom-flannel-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:37.332566   15733 out.go:177] 
	W0320 17:26:37.334564   15733 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:26:37.334596   15733 out.go:239] * 
	W0320 17:26:37.337127   15733 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:26:37.348398   15733 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (9.97s)
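The log above also shows minikube's provisioning retry flow: the first createHost attempt fails, the half-created VM is deleted, start.go waits a fixed 5 seconds ("Will try again in 5 seconds ..."), and exactly one retry runs before the GUEST_PROVISION exit. A simplified Go sketch of that pattern as it appears in the log (illustrative only, not minikube's actual implementation):

// retry_sketch.go — illustrative reconstruction of the one-retry flow in the log;
// createHost is a stand-in for the provisioning step that fails in these runs.
package main

import (
	"errors"
	"fmt"
	"time"
)

func createHost() error {
	// In these runs the step always fails the same way:
	return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
}

func main() {
	const attempts = 2 // initial try plus exactly one retry, as in the log
	var err error
	for i := 0; i < attempts; i++ {
		if err = createHost(); err == nil {
			fmt.Println("host created")
			return
		}
		if i < attempts-1 {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			time.Sleep(5 * time.Second)
		}
	}
	fmt.Printf("X Exiting due to GUEST_PROVISION: %v\n", err)
}

Because the retry reuses the same socket path and the daemon never comes back within the 5-second window, the second attempt is guaranteed to fail identically, which is why each of these tests lasts roughly 10 seconds.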

TestNetworkPlugins/group/calico/Start (9.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p calico-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p calico-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=qemu2 : exit status 80 (9.846276042s)

-- stdout --
	* [calico-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "calico-670000" primary control-plane node in "calico-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "calico-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:26:39.837474   15857 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:26:39.837610   15857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:39.837613   15857 out.go:304] Setting ErrFile to fd 2...
	I0320 17:26:39.837616   15857 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:39.837774   15857 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:26:39.838829   15857 out.go:298] Setting JSON to false
	I0320 17:26:39.855044   15857 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8769,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:26:39.855110   15857 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:26:39.860740   15857 out.go:177] * [calico-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:26:39.868733   15857 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:26:39.868795   15857 notify.go:220] Checking for updates...
	I0320 17:26:39.877668   15857 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:26:39.880722   15857 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:26:39.884718   15857 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:26:39.887712   15857 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:26:39.890666   15857 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:26:39.894102   15857 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:39.894176   15857 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:39.894222   15857 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:26:39.898729   15857 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:26:39.905661   15857 start.go:297] selected driver: qemu2
	I0320 17:26:39.905668   15857 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:26:39.905676   15857 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:26:39.907963   15857 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:26:39.911679   15857 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:26:39.914772   15857 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:26:39.914813   15857 cni.go:84] Creating CNI manager for "calico"
	I0320 17:26:39.914818   15857 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I0320 17:26:39.914847   15857 start.go:340] cluster config:
	{Name:calico-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_
vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:26:39.919572   15857 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:26:39.926606   15857 out.go:177] * Starting "calico-670000" primary control-plane node in "calico-670000" cluster
	I0320 17:26:39.930684   15857 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:26:39.930700   15857 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:26:39.930715   15857 cache.go:56] Caching tarball of preloaded images
	I0320 17:26:39.930789   15857 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:26:39.930795   15857 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:26:39.930859   15857 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/calico-670000/config.json ...
	I0320 17:26:39.930873   15857 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/calico-670000/config.json: {Name:mk67a2cdbf221c939db0964dde593b1aea1d66ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:26:39.931099   15857 start.go:360] acquireMachinesLock for calico-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:39.931132   15857 start.go:364] duration metric: took 27.041µs to acquireMachinesLock for "calico-670000"
	I0320 17:26:39.931145   15857 start.go:93] Provisioning new machine with config: &{Name:calico-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kube
rnetesVersion:v1.29.3 ClusterName:calico-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:39.931179   15857 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:39.939668   15857 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:39.957065   15857 start.go:159] libmachine.API.Create for "calico-670000" (driver="qemu2")
	I0320 17:26:39.957096   15857 client.go:168] LocalClient.Create starting
	I0320 17:26:39.957160   15857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:39.957192   15857 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:39.957202   15857 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:39.957261   15857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:39.957285   15857 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:39.957293   15857 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:39.957694   15857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:40.133249   15857 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:40.188784   15857 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:40.188800   15857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:40.189011   15857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2
	I0320 17:26:40.201187   15857 main.go:141] libmachine: STDOUT: 
	I0320 17:26:40.201208   15857 main.go:141] libmachine: STDERR: 
	I0320 17:26:40.201260   15857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2 +20000M
	I0320 17:26:40.212111   15857 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:40.212131   15857 main.go:141] libmachine: STDERR: 
	I0320 17:26:40.212149   15857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2
	I0320 17:26:40.212159   15857 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:40.212209   15857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=ba:03:fe:96:1d:58 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2
	I0320 17:26:40.214017   15857 main.go:141] libmachine: STDOUT: 
	I0320 17:26:40.214038   15857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:40.214061   15857 client.go:171] duration metric: took 256.969583ms to LocalClient.Create
	I0320 17:26:42.216192   15857 start.go:128] duration metric: took 2.285070625s to createHost
	I0320 17:26:42.216299   15857 start.go:83] releasing machines lock for "calico-670000", held for 2.285245959s
	W0320 17:26:42.216384   15857 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:42.231521   15857 out.go:177] * Deleting "calico-670000" in qemu2 ...
	W0320 17:26:42.256516   15857 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:42.256542   15857 start.go:728] Will try again in 5 seconds ...
	I0320 17:26:47.258501   15857 start.go:360] acquireMachinesLock for calico-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:47.258862   15857 start.go:364] duration metric: took 280.042µs to acquireMachinesLock for "calico-670000"
	I0320 17:26:47.258979   15857 start.go:93] Provisioning new machine with config: &{Name:calico-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:calico-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:47.259435   15857 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:47.276051   15857 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:47.326311   15857 start.go:159] libmachine.API.Create for "calico-670000" (driver="qemu2")
	I0320 17:26:47.326355   15857 client.go:168] LocalClient.Create starting
	I0320 17:26:47.326457   15857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:47.326512   15857 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:47.326529   15857 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:47.326602   15857 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:47.326643   15857 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:47.326657   15857 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:47.327175   15857 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:47.480799   15857 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:47.579348   15857 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:47.579354   15857 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:47.579550   15857 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2
	I0320 17:26:47.591729   15857 main.go:141] libmachine: STDOUT: 
	I0320 17:26:47.591750   15857 main.go:141] libmachine: STDERR: 
	I0320 17:26:47.591804   15857 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2 +20000M
	I0320 17:26:47.602458   15857 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:47.602473   15857 main.go:141] libmachine: STDERR: 
	I0320 17:26:47.602493   15857 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2
	I0320 17:26:47.602498   15857 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:47.602528   15857 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=06:7d:71:f2:50:e5 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/calico-670000/disk.qcow2
	I0320 17:26:47.604206   15857 main.go:141] libmachine: STDOUT: 
	I0320 17:26:47.604221   15857 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:47.604232   15857 client.go:171] duration metric: took 277.881625ms to LocalClient.Create
	I0320 17:26:49.606326   15857 start.go:128] duration metric: took 2.346931041s to createHost
	I0320 17:26:49.606383   15857 start.go:83] releasing machines lock for "calico-670000", held for 2.347590584s
	W0320 17:26:49.606752   15857 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p calico-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p calico-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:49.620504   15857 out.go:177] 
	W0320 17:26:49.624472   15857 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:26:49.624549   15857 out.go:239] * 
	* 
	W0320 17:26:49.627115   15857 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:26:49.639359   15857 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (9.85s)
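
Every failure in this group traces to the same root cause visible in the stderr above: nothing is accepting connections on the /var/run/socket_vmnet unix socket that the qemu2 driver's networking depends on. A minimal triage sequence is sketched below; the paths are copied from the log, while the daemon invocation (including the --vmnet-gateway value) is an assumption based on the lima-vm/socket_vmnet README rather than anything this report verifies.

	# Is the unix socket present, and is any daemon listening on it?
	ls -l /var/run/socket_vmnet
	pgrep -fl socket_vmnet

	# If the daemon is down, start it as root (vmnet requires elevated privileges).
	# The gateway address is an assumed example from the socket_vmnet README:
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet &

	# Exercise the client end that minikube shells out to; a no-op command
	# such as `true` tests only the socket connection, not QEMU itself:
	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet true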

TestNetworkPlugins/group/false/Start (9.8s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-arm64 start -p false-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p false-670000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=qemu2 : exit status 80 (9.8023395s)

-- stdout --
	* [false-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "false-670000" primary control-plane node in "false-670000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "false-670000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:26:52.162607   15975 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:26:52.162727   15975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:52.162731   15975 out.go:304] Setting ErrFile to fd 2...
	I0320 17:26:52.162734   15975 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:26:52.162872   15975 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:26:52.163949   15975 out.go:298] Setting JSON to false
	I0320 17:26:52.180467   15975 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8782,"bootTime":1710972030,"procs":512,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:26:52.180525   15975 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:26:52.186576   15975 out.go:177] * [false-670000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:26:52.198737   15975 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:26:52.198773   15975 notify.go:220] Checking for updates...
	I0320 17:26:52.206666   15975 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:26:52.209691   15975 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:26:52.212656   15975 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:26:52.215686   15975 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:26:52.218681   15975 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:26:52.221917   15975 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:52.221987   15975 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:26:52.222040   15975 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:26:52.226655   15975 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:26:52.233626   15975 start.go:297] selected driver: qemu2
	I0320 17:26:52.233632   15975 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:26:52.233638   15975 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:26:52.236057   15975 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:26:52.239624   15975 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:26:52.242790   15975 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:26:52.242833   15975 cni.go:84] Creating CNI manager for "false"
	I0320 17:26:52.242868   15975 start.go:340] cluster config:
	{Name:false-670000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:26:52.247819   15975 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:26:52.254663   15975 out.go:177] * Starting "false-670000" primary control-plane node in "false-670000" cluster
	I0320 17:26:52.258680   15975 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:26:52.258697   15975 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:26:52.258707   15975 cache.go:56] Caching tarball of preloaded images
	I0320 17:26:52.258763   15975 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:26:52.258769   15975 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:26:52.258836   15975 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/false-670000/config.json ...
	I0320 17:26:52.258848   15975 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/false-670000/config.json: {Name:mk665e77aa820c1843f98e15cccd7d3613fc5253 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:26:52.259080   15975 start.go:360] acquireMachinesLock for false-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:52.259113   15975 start.go:364] duration metric: took 27.083µs to acquireMachinesLock for "false-670000"
	I0320 17:26:52.259129   15975 start.go:93] Provisioning new machine with config: &{Name:false-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:52.259168   15975 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:52.265621   15975 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:52.284419   15975 start.go:159] libmachine.API.Create for "false-670000" (driver="qemu2")
	I0320 17:26:52.284457   15975 client.go:168] LocalClient.Create starting
	I0320 17:26:52.284528   15975 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:52.284569   15975 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:52.284579   15975 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:52.284631   15975 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:52.284654   15975 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:52.284664   15975 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:52.285079   15975 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:52.428403   15975 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:52.501441   15975 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:52.501446   15975 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:52.501638   15975 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2
	I0320 17:26:52.513655   15975 main.go:141] libmachine: STDOUT: 
	I0320 17:26:52.513674   15975 main.go:141] libmachine: STDERR: 
	I0320 17:26:52.513727   15975 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2 +20000M
	I0320 17:26:52.524934   15975 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:52.524952   15975 main.go:141] libmachine: STDERR: 
	I0320 17:26:52.524968   15975 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2
	I0320 17:26:52.524975   15975 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:52.525002   15975 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=de:84:16:02:6c:4c -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2
	I0320 17:26:52.526975   15975 main.go:141] libmachine: STDOUT: 
	I0320 17:26:52.526990   15975 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:52.527007   15975 client.go:171] duration metric: took 242.553292ms to LocalClient.Create
	I0320 17:26:54.529179   15975 start.go:128] duration metric: took 2.27006875s to createHost
	I0320 17:26:54.529284   15975 start.go:83] releasing machines lock for "false-670000", held for 2.270207458s
	W0320 17:26:54.529366   15975 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:54.544476   15975 out.go:177] * Deleting "false-670000" in qemu2 ...
	W0320 17:26:54.570300   15975 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:26:54.570325   15975 start.go:728] Will try again in 5 seconds ...
	I0320 17:26:59.572313   15975 start.go:360] acquireMachinesLock for false-670000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:26:59.572773   15975 start.go:364] duration metric: took 374.833µs to acquireMachinesLock for "false-670000"
	I0320 17:26:59.572931   15975 start.go:93] Provisioning new machine with config: &{Name:false-670000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:false-670000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:26:59.573224   15975 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:26:59.583786   15975 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0320 17:26:59.632875   15975 start.go:159] libmachine.API.Create for "false-670000" (driver="qemu2")
	I0320 17:26:59.632923   15975 client.go:168] LocalClient.Create starting
	I0320 17:26:59.633029   15975 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:26:59.633087   15975 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:59.633105   15975 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:59.633173   15975 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:26:59.633213   15975 main.go:141] libmachine: Decoding PEM data...
	I0320 17:26:59.633228   15975 main.go:141] libmachine: Parsing certificate...
	I0320 17:26:59.633807   15975 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:26:59.786159   15975 main.go:141] libmachine: Creating SSH key...
	I0320 17:26:59.863051   15975 main.go:141] libmachine: Creating Disk image...
	I0320 17:26:59.863056   15975 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:26:59.863250   15975 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2
	I0320 17:26:59.875760   15975 main.go:141] libmachine: STDOUT: 
	I0320 17:26:59.875785   15975 main.go:141] libmachine: STDERR: 
	I0320 17:26:59.875847   15975 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2 +20000M
	I0320 17:26:59.886418   15975 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:26:59.886442   15975 main.go:141] libmachine: STDERR: 
	I0320 17:26:59.886457   15975 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2
	I0320 17:26:59.886462   15975 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:26:59.886496   15975 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 3072 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/qemu.pid -device virtio-net-pci,netdev=net0,mac=26:67:83:2e:e7:e3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/false-670000/disk.qcow2
	I0320 17:26:59.888277   15975 main.go:141] libmachine: STDOUT: 
	I0320 17:26:59.888297   15975 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:26:59.888310   15975 client.go:171] duration metric: took 255.390166ms to LocalClient.Create
	I0320 17:27:01.890409   15975 start.go:128] duration metric: took 2.317246959s to createHost
	I0320 17:27:01.890455   15975 start.go:83] releasing machines lock for "false-670000", held for 2.317748s
	W0320 17:27:01.890868   15975 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p false-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p false-670000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:01.903345   15975 out.go:177] 
	W0320 17:27:01.907617   15975 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:01.907642   15975 out.go:239] * 
	* 
	W0320 17:27:01.909912   15975 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:27:01.919363   15975 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (9.80s)
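
The false-CNI run fails identically moments after the calico run, so the refusal is environmental rather than per-profile. As a cross-check that does not involve minikube at all, the same refusal can be reproduced by poking the unix socket directly; nc -U is the BSD netcat shipped with macOS:

	# Expect "nc: unix connect failed: Connection refused" while no daemon listens:
	nc -U /var/run/socket_vmnet < /dev/null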

TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (9.784135833s)

-- stdout --
	* [old-k8s-version-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "old-k8s-version-360000" primary control-plane node in "old-k8s-version-360000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "old-k8s-version-360000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:27:04.215506   16088 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:04.215628   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:04.215632   16088 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:04.215634   16088 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:04.215753   16088 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:04.216846   16088 out.go:298] Setting JSON to false
	I0320 17:27:04.233042   16088 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8794,"bootTime":1710972030,"procs":513,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:04.233106   16088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:04.239292   16088 out.go:177] * [old-k8s-version-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:04.247253   16088 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:04.252194   16088 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:04.247306   16088 notify.go:220] Checking for updates...
	I0320 17:27:04.259183   16088 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:04.263166   16088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:04.266220   16088 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:04.269228   16088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:04.272496   16088 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:04.272565   16088 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:04.272631   16088 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:04.277188   16088 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:27:04.284265   16088 start.go:297] selected driver: qemu2
	I0320 17:27:04.284280   16088 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:27:04.284288   16088 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:04.286828   16088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:27:04.290190   16088 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:27:04.293397   16088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:04.293440   16088 cni.go:84] Creating CNI manager for ""
	I0320 17:27:04.293447   16088 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0320 17:27:04.293483   16088 start.go:340] cluster config:
	{Name:old-k8s-version-360000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:04.298162   16088 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:04.306226   16088 out.go:177] * Starting "old-k8s-version-360000" primary control-plane node in "old-k8s-version-360000" cluster
	I0320 17:27:04.309166   16088 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:27:04.309192   16088 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:27:04.309205   16088 cache.go:56] Caching tarball of preloaded images
	I0320 17:27:04.309282   16088 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:27:04.309288   16088 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0320 17:27:04.309356   16088 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/old-k8s-version-360000/config.json ...
	I0320 17:27:04.309368   16088 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/old-k8s-version-360000/config.json: {Name:mkd5be056ac5c01b98baee0667363ecdf9925a6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:27:04.309622   16088 start.go:360] acquireMachinesLock for old-k8s-version-360000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:04.309661   16088 start.go:364] duration metric: took 30.875µs to acquireMachinesLock for "old-k8s-version-360000"
	I0320 17:27:04.309677   16088 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:04.309705   16088 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:04.317169   16088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:04.335525   16088 start.go:159] libmachine.API.Create for "old-k8s-version-360000" (driver="qemu2")
	I0320 17:27:04.335559   16088 client.go:168] LocalClient.Create starting
	I0320 17:27:04.335624   16088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:04.335654   16088 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:04.335666   16088 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:04.335720   16088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:04.335744   16088 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:04.335753   16088 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:04.336180   16088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:04.477421   16088 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:04.562260   16088 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:04.562270   16088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:04.562472   16088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:04.574715   16088 main.go:141] libmachine: STDOUT: 
	I0320 17:27:04.574731   16088 main.go:141] libmachine: STDERR: 
	I0320 17:27:04.574777   16088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2 +20000M
	I0320 17:27:04.585905   16088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:04.585922   16088 main.go:141] libmachine: STDERR: 
	I0320 17:27:04.585936   16088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:04.585940   16088 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:04.585976   16088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=36:c4:48:9a:31:c3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:04.587716   16088 main.go:141] libmachine: STDOUT: 
	I0320 17:27:04.587733   16088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:04.587759   16088 client.go:171] duration metric: took 252.204292ms to LocalClient.Create
	I0320 17:27:06.589880   16088 start.go:128] duration metric: took 2.280214792s to createHost
	I0320 17:27:06.589973   16088 start.go:83] releasing machines lock for "old-k8s-version-360000", held for 2.28037s
	W0320 17:27:06.590044   16088 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:06.604153   16088 out.go:177] * Deleting "old-k8s-version-360000" in qemu2 ...
	W0320 17:27:06.630517   16088 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:06.630543   16088 start.go:728] Will try again in 5 seconds ...
	I0320 17:27:11.630518   16088 start.go:360] acquireMachinesLock for old-k8s-version-360000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:11.630914   16088 start.go:364] duration metric: took 307.5µs to acquireMachinesLock for "old-k8s-version-360000"
	I0320 17:27:11.631038   16088 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:11.631339   16088 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:11.641861   16088 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:11.691651   16088 start.go:159] libmachine.API.Create for "old-k8s-version-360000" (driver="qemu2")
	I0320 17:27:11.691692   16088 client.go:168] LocalClient.Create starting
	I0320 17:27:11.691802   16088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:11.691875   16088 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:11.691899   16088 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:11.691956   16088 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:11.691998   16088 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:11.692009   16088 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:11.692662   16088 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:11.849293   16088 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:11.897095   16088 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:11.897100   16088 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:11.897306   16088 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:11.910016   16088 main.go:141] libmachine: STDOUT: 
	I0320 17:27:11.910038   16088 main.go:141] libmachine: STDERR: 
	I0320 17:27:11.910103   16088 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2 +20000M
	I0320 17:27:11.920732   16088 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:11.920755   16088 main.go:141] libmachine: STDERR: 
	I0320 17:27:11.920767   16088 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:11.920771   16088 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:11.920795   16088 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fc:7b:ae:2a:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:11.922576   16088 main.go:141] libmachine: STDOUT: 
	I0320 17:27:11.922601   16088 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:11.922615   16088 client.go:171] duration metric: took 230.925792ms to LocalClient.Create
	I0320 17:27:13.924770   16088 start.go:128] duration metric: took 2.293471583s to createHost
	I0320 17:27:13.924917   16088 start.go:83] releasing machines lock for "old-k8s-version-360000", held for 2.293988208s
	W0320 17:27:13.925353   16088 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-360000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:13.940048   16088 out.go:177] 
	W0320 17:27:13.941677   16088 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:13.941700   16088 out.go:239] * 
	W0320 17:27:13.944405   16088 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:27:13.955950   16088 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (72.81325ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (9.86s)
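
All of the old-k8s-version failures below share this one root cause: the qemu2 driver launches the VM through /opt/socket_vmnet/bin/socket_vmnet_client, and every attempt dies with 'Failed to connect to "/var/run/socket_vmnet": Connection refused', i.e. the socket_vmnet daemon was not listening on the test host. A minimal sanity check, assuming the Homebrew layout shown in the logs above (the gateway address is an illustrative default, not taken from this report):

	# is the daemon running, and does its socket exist?
	pgrep -fl socket_vmnet
	ls -l /var/run/socket_vmnet
	# if not, start it as root before re-running the suite (example invocation)
	sudo /opt/socket_vmnet/bin/socket_vmnet --vmnet-gateway=192.168.105.1 /var/run/socket_vmnet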

TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-360000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-360000 create -f testdata/busybox.yaml: exit status 1 (29.644ms)

** stderr ** 
	error: context "old-k8s-version-360000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-360000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.751208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.39925ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.09s)
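
DeployApp and the addon checks that follow are cascading failures rather than independent bugs: FirstStart never created the cluster, so no kubeconfig context named "old-k8s-version-360000" exists and every kubectl --context invocation exits 1 immediately. This is easy to confirm by hand (commands mirror what the harness runs; exact output will vary):

	kubectl config get-contexts                                   # the profile's context is absent
	out/minikube-darwin-arm64 status -p old-k8s-version-360000    # "Stopped", exit status 7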

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p old-k8s-version-360000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-360000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-360000 describe deploy/metrics-server -n kube-system: exit status 1 (27.106458ms)

** stderr ** 
	error: context "old-k8s-version-360000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-360000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.740666ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0: exit status 80 (5.197036542s)

-- stdout --
	* [old-k8s-version-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	* Using the qemu2 driver based on existing profile
	* Starting "old-k8s-version-360000" primary control-plane node in "old-k8s-version-360000" cluster
	* Restarting existing qemu2 VM for "old-k8s-version-360000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "old-k8s-version-360000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:27:17.342201   16139 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:17.342335   16139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:17.342338   16139 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:17.342341   16139 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:17.342463   16139 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:17.343499   16139 out.go:298] Setting JSON to false
	I0320 17:27:17.359659   16139 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8807,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:17.359713   16139 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:17.364571   16139 out.go:177] * [old-k8s-version-360000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:17.371510   16139 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:17.375517   16139 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:17.371554   16139 notify.go:220] Checking for updates...
	I0320 17:27:17.383504   16139 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:17.385055   16139 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:17.388542   16139 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:17.391478   16139 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:17.394766   16139 config.go:182] Loaded profile config "old-k8s-version-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0320 17:27:17.398507   16139 out.go:177] * Kubernetes 1.29.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.29.3
	I0320 17:27:17.401530   16139 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:17.405475   16139 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:27:17.412485   16139 start.go:297] selected driver: qemu2
	I0320 17:27:17.412490   16139 start.go:901] validating driver "qemu2" against &{Name:old-k8s-version-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:
0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:17.412544   16139 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:17.414876   16139 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:17.414922   16139 cni.go:84] Creating CNI manager for ""
	I0320 17:27:17.414930   16139 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0320 17:27:17.414957   16139 start.go:340] cluster config:
	{Name:old-k8s-version-360000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-360000 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:17.419363   16139 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:17.427491   16139 out.go:177] * Starting "old-k8s-version-360000" primary control-plane node in "old-k8s-version-360000" cluster
	I0320 17:27:17.431535   16139 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:27:17.431550   16139 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:27:17.431565   16139 cache.go:56] Caching tarball of preloaded images
	I0320 17:27:17.431628   16139 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:27:17.431638   16139 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0320 17:27:17.431722   16139 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/old-k8s-version-360000/config.json ...
	I0320 17:27:17.432216   16139 start.go:360] acquireMachinesLock for old-k8s-version-360000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:17.432246   16139 start.go:364] duration metric: took 23.083µs to acquireMachinesLock for "old-k8s-version-360000"
	I0320 17:27:17.432256   16139 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:27:17.432261   16139 fix.go:54] fixHost starting: 
	I0320 17:27:17.432380   16139 fix.go:112] recreateIfNeeded on old-k8s-version-360000: state=Stopped err=<nil>
	W0320 17:27:17.432391   16139 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:27:17.436530   16139 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-360000" ...
	I0320 17:27:17.443534   16139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fc:7b:ae:2a:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:17.445771   16139 main.go:141] libmachine: STDOUT: 
	I0320 17:27:17.445790   16139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:17.445820   16139 fix.go:56] duration metric: took 13.558291ms for fixHost
	I0320 17:27:17.445826   16139 start.go:83] releasing machines lock for "old-k8s-version-360000", held for 13.57625ms
	W0320 17:27:17.445835   16139 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:17.445875   16139 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:17.445881   16139 start.go:728] Will try again in 5 seconds ...
	I0320 17:27:22.447871   16139 start.go:360] acquireMachinesLock for old-k8s-version-360000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:22.448165   16139 start.go:364] duration metric: took 219.958µs to acquireMachinesLock for "old-k8s-version-360000"
	I0320 17:27:22.448282   16139 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:27:22.448299   16139 fix.go:54] fixHost starting: 
	I0320 17:27:22.448968   16139 fix.go:112] recreateIfNeeded on old-k8s-version-360000: state=Stopped err=<nil>
	W0320 17:27:22.448995   16139 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:27:22.458393   16139 out.go:177] * Restarting existing qemu2 VM for "old-k8s-version-360000" ...
	I0320 17:27:22.461607   16139 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:fc:7b:ae:2a:18 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/old-k8s-version-360000/disk.qcow2
	I0320 17:27:22.471172   16139 main.go:141] libmachine: STDOUT: 
	I0320 17:27:22.471258   16139 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:22.471326   16139 fix.go:56] duration metric: took 23.029625ms for fixHost
	I0320 17:27:22.471351   16139 start.go:83] releasing machines lock for "old-k8s-version-360000", held for 23.162084ms
	W0320 17:27:22.471533   16139 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p old-k8s-version-360000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:22.479404   16139 out.go:177] 
	W0320 17:27:22.482463   16139 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:22.482500   16139 out.go:239] * 
	W0320 17:27:22.484948   16139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:27:22.494377   16139 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=qemu2  --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (69.486542ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (5.27s)
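
SecondStart takes the restart path (fix.go: "Restarting existing qemu2 VM") instead of creating a new machine, but both attempts hit the same socket_vmnet connection refusal. Once the daemon is listening again, the recovery sequence suggested by minikube's own output above would be, roughly (flags copied from the logged start command):

	out/minikube-darwin-arm64 delete -p old-k8s-version-360000
	out/minikube-darwin-arm64 start -p old-k8s-version-360000 --memory=2200 --driver=qemu2 --kubernetes-version=v1.20.0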

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-360000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (33.85525ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-360000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.51875ms)

** stderr ** 
	error: context "old-k8s-version-360000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-360000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.981041ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p old-k8s-version-360000 image list --format=json
start_stop_delete_test.go:304: v1.20.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.7.0",
- 	"k8s.gcr.io/etcd:3.4.13-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.20.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.20.0",
- 	"k8s.gcr.io/kube-proxy:v1.20.0",
- 	"k8s.gcr.io/kube-scheduler:v1.20.0",
- 	"k8s.gcr.io/pause:3.2",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.748625ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.07s)
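
The (-want +got) output above appears to be a go-cmp style diff: each line prefixed with "-" is an expected v1.20.0 image that was not reported, and the got side is empty because "image list" ran against a profile whose VM never booted. Once a cluster is actually up, re-running the listing by hand (command copied from the harness) should return all eight k8s.gcr.io images:

	out/minikube-darwin-arm64 -p old-k8s-version-360000 image list --format=json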

TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p old-k8s-version-360000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p old-k8s-version-360000 --alsologtostderr -v=1: exit status 83 (44.67275ms)

-- stdout --
	* The control-plane node old-k8s-version-360000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p old-k8s-version-360000"

-- /stdout --
** stderr ** 
	I0320 17:27:22.776720   16158 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:22.777136   16158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:22.777139   16158 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:22.777142   16158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:22.777315   16158 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:22.777521   16158 out.go:298] Setting JSON to false
	I0320 17:27:22.777530   16158 mustload.go:65] Loading cluster: old-k8s-version-360000
	I0320 17:27:22.777716   16158 config.go:182] Loaded profile config "old-k8s-version-360000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0320 17:27:22.782373   16158 out.go:177] * The control-plane node old-k8s-version-360000 host is not running: state=Stopped
	I0320 17:27:22.786449   16158 out.go:177]   To start a cluster, run: "minikube start -p old-k8s-version-360000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p old-k8s-version-360000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.451375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (31.280958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-360000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (9.98s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-472000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-472000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.907212917s)

-- stdout --
	* [no-preload-472000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "no-preload-472000" primary control-plane node in "no-preload-472000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "no-preload-472000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:27:23.260203   16181 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:23.260322   16181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:23.260324   16181 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:23.260327   16181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:23.260442   16181 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:23.261533   16181 out.go:298] Setting JSON to false
	I0320 17:27:23.277993   16181 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8813,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:23.278051   16181 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:23.283233   16181 out.go:177] * [no-preload-472000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:23.290278   16181 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:23.294282   16181 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:23.290332   16181 notify.go:220] Checking for updates...
	I0320 17:27:23.301182   16181 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:23.305239   16181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:23.306712   16181 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:23.310275   16181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:23.313574   16181 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:23.313636   16181 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:23.313690   16181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:23.315356   16181 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:27:23.322241   16181 start.go:297] selected driver: qemu2
	I0320 17:27:23.322246   16181 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:27:23.322251   16181 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:23.324427   16181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:27:23.328073   16181 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:27:23.331342   16181 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:23.331398   16181 cni.go:84] Creating CNI manager for ""
	I0320 17:27:23.331405   16181 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:27:23.331409   16181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:27:23.331435   16181 start.go:340] cluster config:
	{Name:no-preload-472000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vm
net/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:23.335851   16181 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.344224   16181 out.go:177] * Starting "no-preload-472000" primary control-plane node in "no-preload-472000" cluster
	I0320 17:27:23.348262   16181 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:27:23.348344   16181 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/no-preload-472000/config.json ...
	I0320 17:27:23.348361   16181 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/no-preload-472000/config.json: {Name:mk01fd4fafe6e4938f50ce563916baccbdce7640 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:27:23.348557   16181 cache.go:107] acquiring lock: {Name:mkae213047fac7884e2019be9e16312c0f1d493b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348556   16181 cache.go:107] acquiring lock: {Name:mkab1eb4a442b8c135b226819c26220ea68b7813 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348559   16181 cache.go:107] acquiring lock: {Name:mkd15f9db9ab214899442a9579b31a714da0ae7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348605   16181 cache.go:107] acquiring lock: {Name:mkb87c2a21a18c2e2c375c77929bdedbdda1bc9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348616   16181 cache.go:107] acquiring lock: {Name:mkca3e73d4318df059ac000624160bd277c99517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348629   16181 cache.go:107] acquiring lock: {Name:mk21f402caad59b1c64bcf8dbe646a00ed9c9a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348603   16181 cache.go:107] acquiring lock: {Name:mk4234d4f6eb905e3345e4dfa3309ebb8a02c9e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348621   16181 cache.go:107] acquiring lock: {Name:mkc2be43114db7e9e585d9b5a82ee9b44e9e2cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:23.348743   16181 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0320 17:27:23.348760   16181 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0320 17:27:23.348821   16181 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0320 17:27:23.348822   16181 start.go:360] acquireMachinesLock for no-preload-472000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:23.348830   16181 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0320 17:27:23.348880   16181 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0320 17:27:23.348885   16181 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0320 17:27:23.348931   16181 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0320 17:27:23.348943   16181 start.go:364] duration metric: took 101.375µs to acquireMachinesLock for "no-preload-472000"
	I0320 17:27:23.348831   16181 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 461.667µs
	I0320 17:27:23.348963   16181 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0320 17:27:23.348959   16181 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0320 17:27:23.348991   16181 start.go:93] Provisioning new machine with config: &{Name:no-preload-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:2621
44 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:23.349022   16181 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:23.357260   16181 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:23.362834   16181 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0320 17:27:23.363554   16181 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.30.0-beta.0
	I0320 17:27:23.366984   16181 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.30.0-beta.0
	I0320 17:27:23.366998   16181 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0320 17:27:23.367037   16181 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0320 17:27:23.367154   16181 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.30.0-beta.0
	I0320 17:27:23.367192   16181 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.30.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.30.0-beta.0
	I0320 17:27:23.374980   16181 start.go:159] libmachine.API.Create for "no-preload-472000" (driver="qemu2")
	I0320 17:27:23.374997   16181 client.go:168] LocalClient.Create starting
	I0320 17:27:23.375058   16181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:23.375089   16181 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:23.375100   16181 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:23.375156   16181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:23.375180   16181 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:23.375188   16181 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:23.375530   16181 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:23.544180   16181 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:23.666549   16181 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:23.666569   16181 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:23.666759   16181 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:23.679009   16181 main.go:141] libmachine: STDOUT: 
	I0320 17:27:23.679059   16181 main.go:141] libmachine: STDERR: 
	I0320 17:27:23.679122   16181 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2 +20000M
	I0320 17:27:23.689916   16181 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:23.689931   16181 main.go:141] libmachine: STDERR: 
	I0320 17:27:23.689944   16181 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:23.689949   16181 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:23.689976   16181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:9f:d2:0b:fc:9b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:23.691799   16181 main.go:141] libmachine: STDOUT: 
	I0320 17:27:23.691815   16181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:23.691832   16181 client.go:171] duration metric: took 316.842834ms to LocalClient.Create
	I0320 17:27:25.370904   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0
	I0320 17:27:25.413549   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1
	I0320 17:27:25.458880   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9
	I0320 17:27:25.493403   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0320 17:27:25.499091   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0
	I0320 17:27:25.536785   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0
	I0320 17:27:25.547175   16181 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0
	I0320 17:27:25.634339   16181 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0320 17:27:25.634391   16181 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 2.285868125s
	I0320 17:27:25.634423   16181 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0320 17:27:25.691960   16181 start.go:128] duration metric: took 2.343010916s to createHost
	I0320 17:27:25.692003   16181 start.go:83] releasing machines lock for "no-preload-472000", held for 2.343138625s
	W0320 17:27:25.692067   16181 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:25.706913   16181 out.go:177] * Deleting "no-preload-472000" in qemu2 ...
	W0320 17:27:25.732000   16181 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:25.732033   16181 start.go:728] Will try again in 5 seconds ...
	I0320 17:27:28.060511   16181 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0320 17:27:28.060572   16181 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 4.712244958s
	I0320 17:27:28.060603   16181 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0320 17:27:28.294061   16181 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0320 17:27:28.294127   16181 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 4.945864375s
	I0320 17:27:28.294185   16181 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0320 17:27:28.658706   16181 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0320 17:27:28.658773   16181 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 5.310590583s
	I0320 17:27:28.658822   16181 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0320 17:27:29.804414   16181 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0320 17:27:29.804467   16181 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 6.456275417s
	I0320 17:27:29.804493   16181 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0320 17:27:30.728728   16181 cache.go:157] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0320 17:27:30.728801   16181 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 7.380702084s
	I0320 17:27:30.728828   16181 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0320 17:27:30.733194   16181 start.go:360] acquireMachinesLock for no-preload-472000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:30.733497   16181 start.go:364] duration metric: took 250.625µs to acquireMachinesLock for "no-preload-472000"
	I0320 17:27:30.733587   16181 start.go:93] Provisioning new machine with config: &{Name:no-preload-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:30.733870   16181 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:30.743316   16181 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:30.791573   16181 start.go:159] libmachine.API.Create for "no-preload-472000" (driver="qemu2")
	I0320 17:27:30.791623   16181 client.go:168] LocalClient.Create starting
	I0320 17:27:30.791740   16181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:30.791810   16181 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:30.791830   16181 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:30.791894   16181 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:30.791935   16181 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:30.791950   16181 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:30.792491   16181 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:30.964934   16181 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:31.066824   16181 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:31.066829   16181 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:31.067021   16181 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:31.079886   16181 main.go:141] libmachine: STDOUT: 
	I0320 17:27:31.079906   16181 main.go:141] libmachine: STDERR: 
	I0320 17:27:31.079968   16181 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2 +20000M
	I0320 17:27:31.091255   16181 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:31.091284   16181 main.go:141] libmachine: STDERR: 
	I0320 17:27:31.091304   16181 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:31.091308   16181 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:31.091349   16181 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f5:14:89:c7:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:31.093165   16181 main.go:141] libmachine: STDOUT: 
	I0320 17:27:31.093181   16181 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:31.093198   16181 client.go:171] duration metric: took 301.578042ms to LocalClient.Create
	I0320 17:27:33.093575   16181 start.go:128] duration metric: took 2.35972s to createHost
	I0320 17:27:33.093646   16181 start.go:83] releasing machines lock for "no-preload-472000", held for 2.360216958s
	W0320 17:27:33.093972   16181 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-472000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:33.106609   16181 out.go:177] 
	W0320 17:27:33.110586   16181 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:33.110620   16181 out.go:239] * 
	* 
	W0320 17:27:33.113175   16181 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:27:33.120483   16181 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p no-preload-472000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (67.087ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (9.98s)
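
Note: both start attempts in this test die at the same point: socket_vmnet_client cannot reach the Unix socket /var/run/socket_vmnet, so qemu-system-aarch64 is never launched. A minimal Go sketch of the same probe, assuming only the socket path shown in the log (file name and messages are illustrative, not minikube code):

	// probe_socket_vmnet.go
	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// Dial the same Unix socket that socket_vmnet_client dials; a
		// "connection refused" error here reproduces the STDERR in the log
		// and means the socket_vmnet daemon is not serving on that path.
		conn, err := net.Dial("unix", "/var/run/socket_vmnet")
		if err != nil {
			fmt.Println("socket_vmnet not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("socket_vmnet is accepting connections")
	}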

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-472000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-472000 create -f testdata/busybox.yaml: exit status 1 (29.834792ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-472000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-472000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (31.453709ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (32.753125ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.09s)
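
Note: the kubectl failures in this group are all downstream of the failed start: minikube writes a "no-preload-472000" context into the kubeconfig only after the VM provisions, so every "kubectl --context no-preload-472000 ..." invocation fails. A hypothetical check with client-go's clientcmd loader (the same kubeconfig chain kubectl uses) would show the context is simply absent:

	// list_contexts.go
	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig chain (KUBECONFIG, then ~/.kube/config),
		// as kubectl does, and print the context names it contains.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		cfg, err := rules.Load()
		if err != nil {
			panic(err)
		}
		for name := range cfg.Contexts {
			fmt.Println(name) // "no-preload-472000" would be missing here
		}
	}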

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p no-preload-472000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-472000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-472000 describe deploy/metrics-server -n kube-system: exit status 1 (27.457459ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-472000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-472000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (31.598834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (5.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p no-preload-472000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p no-preload-472000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.189628167s)

                                                
                                                
-- stdout --
	* [no-preload-472000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "no-preload-472000" primary control-plane node in "no-preload-472000" cluster
	* Restarting existing qemu2 VM for "no-preload-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "no-preload-472000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:27:36.705258   16263 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:36.705388   16263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:36.705391   16263 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:36.705394   16263 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:36.705510   16263 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:36.706507   16263 out.go:298] Setting JSON to false
	I0320 17:27:36.722952   16263 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8826,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:36.723015   16263 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:36.728124   16263 out.go:177] * [no-preload-472000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:36.735288   16263 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:36.735346   16263 notify.go:220] Checking for updates...
	I0320 17:27:36.738242   16263 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:36.742257   16263 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:36.746292   16263 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:36.749260   16263 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:36.752260   16263 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:36.755655   16263 config.go:182] Loaded profile config "no-preload-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0320 17:27:36.755917   16263 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:36.760201   16263 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:27:36.767239   16263 start.go:297] selected driver: qemu2
	I0320 17:27:36.767246   16263 start.go:901] validating driver "qemu2" against &{Name:no-preload-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:36.767289   16263 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:36.769522   16263 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:36.769568   16263 cni.go:84] Creating CNI manager for ""
	I0320 17:27:36.769576   16263 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:27:36.769598   16263 start.go:340] cluster config:
	{Name:no-preload-472000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:no-preload-472000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:36.773965   16263 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.781251   16263 out.go:177] * Starting "no-preload-472000" primary control-plane node in "no-preload-472000" cluster
	I0320 17:27:36.785123   16263 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:27:36.785201   16263 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/no-preload-472000/config.json ...
	I0320 17:27:36.785248   16263 cache.go:107] acquiring lock: {Name:mkc2be43114db7e9e585d9b5a82ee9b44e9e2cd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785263   16263 cache.go:107] acquiring lock: {Name:mkd15f9db9ab214899442a9579b31a714da0ae7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785277   16263 cache.go:107] acquiring lock: {Name:mkae213047fac7884e2019be9e16312c0f1d493b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785320   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0320 17:27:36.785330   16263 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 85.375µs
	I0320 17:27:36.785336   16263 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0320 17:27:36.785336   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 exists
	I0320 17:27:36.785353   16263 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0" took 103.708µs
	I0320 17:27:36.785358   16263 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.30.0-beta.0 succeeded
	I0320 17:27:36.785344   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 exists
	I0320 17:27:36.785365   16263 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0" took 98.959µs
	I0320 17:27:36.785368   16263 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.30.0-beta.0 succeeded
	I0320 17:27:36.785343   16263 cache.go:107] acquiring lock: {Name:mk21f402caad59b1c64bcf8dbe646a00ed9c9a51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785367   16263 cache.go:107] acquiring lock: {Name:mkab1eb4a442b8c135b226819c26220ea68b7813 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785401   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 exists
	I0320 17:27:36.785406   16263 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0" took 64.208µs
	I0320 17:27:36.785411   16263 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.30.0-beta.0 succeeded
	I0320 17:27:36.785357   16263 cache.go:107] acquiring lock: {Name:mkca3e73d4318df059ac000624160bd277c99517 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785404   16263 cache.go:107] acquiring lock: {Name:mk4234d4f6eb905e3345e4dfa3309ebb8a02c9e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785453   16263 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.12-0
	I0320 17:27:36.785429   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0320 17:27:36.785492   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 exists
	I0320 17:27:36.785453   16263 cache.go:107] acquiring lock: {Name:mkb87c2a21a18c2e2c375c77929bdedbdda1bc9c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:36.785500   16263 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.30.0-beta.0" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0" took 108.25µs
	I0320 17:27:36.785505   16263 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.30.0-beta.0 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.30.0-beta.0 succeeded
	I0320 17:27:36.785524   16263 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1" took 130.625µs
	I0320 17:27:36.785541   16263 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0320 17:27:36.785555   16263 cache.go:115] /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 exists
	I0320 17:27:36.785560   16263 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9" took 146.25µs
	I0320 17:27:36.785572   16263 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/pause_3.9 succeeded
	I0320 17:27:36.785586   16263 start.go:360] acquireMachinesLock for no-preload-472000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:36.785619   16263 start.go:364] duration metric: took 26.709µs to acquireMachinesLock for "no-preload-472000"
	I0320 17:27:36.785628   16263 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:27:36.785635   16263 fix.go:54] fixHost starting: 
	I0320 17:27:36.785760   16263 fix.go:112] recreateIfNeeded on no-preload-472000: state=Stopped err=<nil>
	W0320 17:27:36.785768   16263 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:27:36.793228   16263 out.go:177] * Restarting existing qemu2 VM for "no-preload-472000" ...
	I0320 17:27:36.796232   16263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f5:14:89:c7:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:36.796909   16263 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.12-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.12-0
	I0320 17:27:36.798683   16263 main.go:141] libmachine: STDOUT: 
	I0320 17:27:36.798712   16263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:36.798756   16263 fix.go:56] duration metric: took 13.121167ms for fixHost
	I0320 17:27:36.798761   16263 start.go:83] releasing machines lock for "no-preload-472000", held for 13.139292ms
	W0320 17:27:36.798768   16263 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:36.798799   16263 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:36.798804   16263 start.go:728] Will try again in 5 seconds ...
	I0320 17:27:38.734739   16263 cache.go:162] opening:  /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.5.12-0
	I0320 17:27:41.799799   16263 start.go:360] acquireMachinesLock for no-preload-472000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:41.800145   16263 start.go:364] duration metric: took 262.125µs to acquireMachinesLock for "no-preload-472000"
	I0320 17:27:41.800274   16263 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:27:41.800304   16263 fix.go:54] fixHost starting: 
	I0320 17:27:41.800955   16263 fix.go:112] recreateIfNeeded on no-preload-472000: state=Stopped err=<nil>
	W0320 17:27:41.800981   16263 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:27:41.806645   16263 out.go:177] * Restarting existing qemu2 VM for "no-preload-472000" ...
	I0320 17:27:41.813635   16263 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/qemu.pid -device virtio-net-pci,netdev=net0,mac=7a:f5:14:89:c7:2d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/no-preload-472000/disk.qcow2
	I0320 17:27:41.824297   16263 main.go:141] libmachine: STDOUT: 
	I0320 17:27:41.824451   16263 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:41.824528   16263 fix.go:56] duration metric: took 24.229291ms for fixHost
	I0320 17:27:41.824547   16263 start.go:83] releasing machines lock for "no-preload-472000", held for 24.381375ms
	W0320 17:27:41.824757   16263 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p no-preload-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p no-preload-472000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:41.833559   16263 out.go:177] 
	W0320 17:27:41.837654   16263 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:41.837686   16263 out.go:239] * 
	* 
	W0320 17:27:41.840411   16263 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:27:41.849527   16263 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p no-preload-472000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (69.895625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (5.26s)
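
Note: the paired lines "! StartHost failed, but will try again" and "Will try again in 5 seconds" in both start logs reflect a single fixed-delay retry before minikube gives up with GUEST_PROVISION. A stripped-down sketch of that control flow, with invented names standing in for the real logic in minikube's start.go:

	// retry_sketch.go
	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// startHost stands in for the real host-start call; here it always fails
	// the same way the log does.
	func startHost() error {
		return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
	}

	func main() {
		if err := startHost(); err != nil {
			fmt.Println("! StartHost failed, but will try again:", err)
			time.Sleep(5 * time.Second) // the "Will try again in 5 seconds" delay
			if err := startHost(); err != nil {
				fmt.Println("X Exiting due to GUEST_PROVISION:", err)
			}
		}
	}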

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-472000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (33.696375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.03s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-472000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-472000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-472000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.711333ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-472000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-472000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (31.487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p no-preload-472000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (31.637667ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.07s)
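
Note: the "(-want +got)" block above is a go-cmp style diff: the expected image list is compared against the output of "minikube image list --format=json", and because the VM never started the got side is empty, so every expected image reads as missing. A self-contained sketch of producing such a diff, assuming github.com/google/go-cmp and an abbreviated want list:

	// cmp_diff_sketch.go
	package main

	import (
		"fmt"

		"github.com/google/go-cmp/cmp"
	)

	func main() {
		// want is abbreviated; the real test expects all eight images listed above.
		want := []string{
			"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
			"registry.k8s.io/pause:3.9",
		}
		got := []string{} // the image list is empty when the VM never started
		if diff := cmp.Diff(want, got); diff != "" {
			fmt.Printf("images missing (-want +got):\n%s", diff)
		}
	}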

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p no-preload-472000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p no-preload-472000 --alsologtostderr -v=1: exit status 83 (43.09375ms)

                                                
                                                
-- stdout --
	* The control-plane node no-preload-472000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p no-preload-472000"

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:27:42.133347   16289 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:42.133528   16289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:42.133531   16289 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:42.133533   16289 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:42.133663   16289 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:42.133881   16289 out.go:298] Setting JSON to false
	I0320 17:27:42.133890   16289 mustload.go:65] Loading cluster: no-preload-472000
	I0320 17:27:42.134077   16289 config.go:182] Loaded profile config "no-preload-472000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0320 17:27:42.138210   16289 out.go:177] * The control-plane node no-preload-472000 host is not running: state=Stopped
	I0320 17:27:42.142116   16289 out.go:177]   To start a cluster, run: "minikube start -p no-preload-472000"

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p no-preload-472000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (31.664375ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (32.091834ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-472000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.11s)
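
Note: the "(dbg) Run" / "(dbg) Non-zero exit" pairs throughout this report come from the harness shelling out to the binary under test and capturing its exit status. A minimal sketch of that capture in Go, with the command line abbreviated to the pause invocation above:

	// exec_status_sketch.go
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the CLI and capture both its combined output and its exit status,
		// the way the helpers report "(dbg) Non-zero exit: ...".
		cmd := exec.Command("out/minikube-darwin-arm64", "pause", "-p", "no-preload-472000")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Printf("Non-zero exit: exit status %d\n%s", exitErr.ExitCode(), out)
		}
	}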

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (9.89s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.821780833s)

                                                
                                                
-- stdout --
	* [embed-certs-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "embed-certs-878000" primary control-plane node in "embed-certs-878000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "embed-certs-878000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:27:42.608400   16312 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:42.608540   16312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:42.608544   16312 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:42.608546   16312 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:42.608677   16312 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:42.609827   16312 out.go:298] Setting JSON to false
	I0320 17:27:42.626076   16312 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8832,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:42.626145   16312 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:42.631744   16312 out.go:177] * [embed-certs-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:42.637734   16312 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:42.641778   16312 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:42.637801   16312 notify.go:220] Checking for updates...
	I0320 17:27:42.647691   16312 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:42.654724   16312 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:42.657672   16312 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:42.664712   16312 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:42.668104   16312 config.go:182] Loaded profile config "cert-expiration-498000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:42.668174   16312 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:42.668222   16312 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:42.672703   16312 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:27:42.679674   16312 start.go:297] selected driver: qemu2
	I0320 17:27:42.679689   16312 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:27:42.679695   16312 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:42.682174   16312 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:27:42.685729   16312 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:27:42.688816   16312 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:42.688858   16312 cni.go:84] Creating CNI manager for ""
	I0320 17:27:42.688868   16312 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:27:42.688873   16312 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:27:42.688900   16312 start.go:340] cluster config:
	{Name:embed-certs-878000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:42.693593   16312 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:42.704788   16312 out.go:177] * Starting "embed-certs-878000" primary control-plane node in "embed-certs-878000" cluster
	I0320 17:27:42.708734   16312 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:27:42.708752   16312 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:27:42.708767   16312 cache.go:56] Caching tarball of preloaded images
	I0320 17:27:42.708832   16312 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:27:42.708846   16312 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:27:42.708922   16312 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/embed-certs-878000/config.json ...
	I0320 17:27:42.708934   16312 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/embed-certs-878000/config.json: {Name:mk9e8534a7b8384106beb69d530010919e362c21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:27:42.709172   16312 start.go:360] acquireMachinesLock for embed-certs-878000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:42.709212   16312 start.go:364] duration metric: took 30.375µs to acquireMachinesLock for "embed-certs-878000"
	I0320 17:27:42.709229   16312 start.go:93] Provisioning new machine with config: &{Name:embed-certs-878000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:42.709280   16312 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:42.717705   16312 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:42.737486   16312 start.go:159] libmachine.API.Create for "embed-certs-878000" (driver="qemu2")
	I0320 17:27:42.737524   16312 client.go:168] LocalClient.Create starting
	I0320 17:27:42.737603   16312 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:42.737635   16312 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:42.737645   16312 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:42.737693   16312 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:42.737721   16312 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:42.737728   16312 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:42.738114   16312 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:42.906565   16312 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:42.981711   16312 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:42.981718   16312 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:42.981888   16312 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:42.993948   16312 main.go:141] libmachine: STDOUT: 
	I0320 17:27:42.993976   16312 main.go:141] libmachine: STDERR: 
	I0320 17:27:42.994023   16312 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2 +20000M
	I0320 17:27:43.004533   16312 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:43.004558   16312 main.go:141] libmachine: STDERR: 
	I0320 17:27:43.004573   16312 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:43.004577   16312 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:43.004608   16312 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=a2:08:ed:af:d9:0b -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:43.006377   16312 main.go:141] libmachine: STDOUT: 
	I0320 17:27:43.006396   16312 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:43.006416   16312 client.go:171] duration metric: took 268.8965ms to LocalClient.Create
	I0320 17:27:45.008664   16312 start.go:128] duration metric: took 2.299442917s to createHost
	I0320 17:27:45.008731   16312 start.go:83] releasing machines lock for "embed-certs-878000", held for 2.29959825s
	W0320 17:27:45.008788   16312 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:45.021044   16312 out.go:177] * Deleting "embed-certs-878000" in qemu2 ...
	W0320 17:27:45.047210   16312 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:45.047240   16312 start.go:728] Will try again in 5 seconds ...
	I0320 17:27:50.049302   16312 start.go:360] acquireMachinesLock for embed-certs-878000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:50.049704   16312 start.go:364] duration metric: took 309.5µs to acquireMachinesLock for "embed-certs-878000"
	I0320 17:27:50.049826   16312 start.go:93] Provisioning new machine with config: &{Name:embed-certs-878000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:50.050125   16312 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:50.059712   16312 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:50.108214   16312 start.go:159] libmachine.API.Create for "embed-certs-878000" (driver="qemu2")
	I0320 17:27:50.108262   16312 client.go:168] LocalClient.Create starting
	I0320 17:27:50.108347   16312 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:50.108405   16312 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:50.108423   16312 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:50.108474   16312 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:50.108515   16312 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:50.108525   16312 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:50.109026   16312 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:50.265412   16312 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:50.328937   16312 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:50.328942   16312 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:50.329122   16312 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:50.341766   16312 main.go:141] libmachine: STDOUT: 
	I0320 17:27:50.341791   16312 main.go:141] libmachine: STDERR: 
	I0320 17:27:50.341853   16312 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2 +20000M
	I0320 17:27:50.352915   16312 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:50.352930   16312 main.go:141] libmachine: STDERR: 
	I0320 17:27:50.352949   16312 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:50.352953   16312 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:50.352982   16312 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e0:22:f4:75:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:50.354754   16312 main.go:141] libmachine: STDOUT: 
	I0320 17:27:50.354772   16312 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:50.354789   16312 client.go:171] duration metric: took 246.530708ms to LocalClient.Create
	I0320 17:27:52.356898   16312 start.go:128] duration metric: took 2.306831833s to createHost
	I0320 17:27:52.357026   16312 start.go:83] releasing machines lock for "embed-certs-878000", held for 2.3073875s
	W0320 17:27:52.357437   16312 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-878000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:52.371127   16312 out.go:177] 
	W0320 17:27:52.375285   16312 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:52.375321   16312 out.go:239] * 
	* 
	W0320 17:27:52.377809   16312 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:27:52.386060   16312 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (68.471084ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (9.89s)
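
Triage note: the start fails before the VM ever boots, at the point where libmachine hands the qemu-system-aarch64 command line to socket_vmnet_client, so the "Connection refused" on /var/run/socket_vmnet points at the socket_vmnet daemon on the CI host rather than at minikube itself. A minimal health check, assuming socket_vmnet was installed through Homebrew as minikube's qemu2 driver docs suggest:

	# Is the daemon running, and does its unix socket exist?
	pgrep -l socket_vmnet
	ls -l /var/run/socket_vmnet
	# If not, restart it via launchd (root required; Homebrew install assumed)
	sudo brew services restart socket_vmnet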

TestStartStop/group/embed-certs/serial/DeployApp (0.09s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-878000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-878000 create -f testdata/busybox.yaml: exit status 1 (29.478958ms)

** stderr ** 
	error: context "embed-certs-878000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-878000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (31.727917ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (31.610791ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.09s)
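
Triage note: the error context "embed-certs-878000" does not exist is a downstream symptom of the failed FirstStart above; the kubeconfig context is only written once a cluster actually comes up, and this one never did. Plain kubectl confirms whether the context was ever created:

	kubectl config get-contexts
	kubectl config current-context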

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p embed-certs-878000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-878000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-878000 describe deploy/metrics-server -n kube-system: exit status 1 (26.379333ms)

** stderr ** 
	error: context "embed-certs-878000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-878000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (32.257208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.12s)
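
Triage note: this test asserts that the metrics-server deployment picks up the overridden registry (fake.domain/registry.k8s.io/echoserver:1.4). On a cluster that had started, the override could be verified directly with a standard jsonpath query, e.g.:

	kubectl --context embed-certs-878000 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'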

TestStartStop/group/embed-certs/serial/SecondStart (5.83s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.785772333s)

-- stdout --
	* [embed-certs-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "embed-certs-878000" primary control-plane node in "embed-certs-878000" cluster
	* Restarting existing qemu2 VM for "embed-certs-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "embed-certs-878000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:27:56.225974   16373 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:56.226104   16373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:56.226107   16373 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:56.226109   16373 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:56.226226   16373 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:56.227226   16373 out.go:298] Setting JSON to false
	I0320 17:27:56.243319   16373 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8846,"bootTime":1710972030,"procs":518,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:56.243387   16373 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:56.248527   16373 out.go:177] * [embed-certs-878000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:56.255500   16373 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:56.258484   16373 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:56.255554   16373 notify.go:220] Checking for updates...
	I0320 17:27:56.266455   16373 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:56.269492   16373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:56.272505   16373 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:56.275471   16373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:56.278814   16373 config.go:182] Loaded profile config "embed-certs-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:56.279070   16373 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:56.283512   16373 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:27:56.290491   16373 start.go:297] selected driver: qemu2
	I0320 17:27:56.290498   16373 start.go:901] validating driver "qemu2" against &{Name:embed-certs-878000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:56.290565   16373 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:56.292861   16373 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:56.292908   16373 cni.go:84] Creating CNI manager for ""
	I0320 17:27:56.292916   16373 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:27:56.292944   16373 start.go:340] cluster config:
	{Name:embed-certs-878000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:embed-certs-878000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:56.297313   16373 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:56.304457   16373 out.go:177] * Starting "embed-certs-878000" primary control-plane node in "embed-certs-878000" cluster
	I0320 17:27:56.308479   16373 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:27:56.308492   16373 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:27:56.308501   16373 cache.go:56] Caching tarball of preloaded images
	I0320 17:27:56.308546   16373 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:27:56.308552   16373 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:27:56.308609   16373 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/embed-certs-878000/config.json ...
	I0320 17:27:56.309109   16373 start.go:360] acquireMachinesLock for embed-certs-878000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:56.309137   16373 start.go:364] duration metric: took 21.459µs to acquireMachinesLock for "embed-certs-878000"
	I0320 17:27:56.309145   16373 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:27:56.309152   16373 fix.go:54] fixHost starting: 
	I0320 17:27:56.309287   16373 fix.go:112] recreateIfNeeded on embed-certs-878000: state=Stopped err=<nil>
	W0320 17:27:56.309295   16373 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:27:56.313469   16373 out.go:177] * Restarting existing qemu2 VM for "embed-certs-878000" ...
	I0320 17:27:56.320460   16373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e0:22:f4:75:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:27:56.322464   16373 main.go:141] libmachine: STDOUT: 
	I0320 17:27:56.322488   16373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:56.322518   16373 fix.go:56] duration metric: took 13.366833ms for fixHost
	I0320 17:27:56.322525   16373 start.go:83] releasing machines lock for "embed-certs-878000", held for 13.384167ms
	W0320 17:27:56.322532   16373 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:27:56.322572   16373 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:27:56.322578   16373 start.go:728] Will try again in 5 seconds ...
	I0320 17:28:01.324172   16373 start.go:360] acquireMachinesLock for embed-certs-878000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:01.916578   16373 start.go:364] duration metric: took 592.213334ms to acquireMachinesLock for "embed-certs-878000"
	I0320 17:28:01.916742   16373 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:28:01.916763   16373 fix.go:54] fixHost starting: 
	I0320 17:28:01.917495   16373 fix.go:112] recreateIfNeeded on embed-certs-878000: state=Stopped err=<nil>
	W0320 17:28:01.917522   16373 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:28:01.926087   16373 out.go:177] * Restarting existing qemu2 VM for "embed-certs-878000" ...
	I0320 17:28:01.935328   16373 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:e0:22:f4:75:3d -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/embed-certs-878000/disk.qcow2
	I0320 17:28:01.943289   16373 main.go:141] libmachine: STDOUT: 
	I0320 17:28:01.943355   16373 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:01.943418   16373 fix.go:56] duration metric: took 26.659167ms for fixHost
	I0320 17:28:01.943434   16373 start.go:83] releasing machines lock for "embed-certs-878000", held for 26.829125ms
	W0320 17:28:01.943567   16373 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p embed-certs-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p embed-certs-878000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:01.951155   16373 out.go:177] 
	W0320 17:28:01.956222   16373 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:01.956255   16373 out.go:239] * 
	* 
	W0320 17:28:01.957662   16373 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:28:01.971096   16373 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (42.036167ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (5.83s)
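
Triage note: the failure message itself names the usual recovery path. Once socket_vmnet is healthy again, deleting the stale profile and re-running the same start flags from this test should succeed:

	out/minikube-darwin-arm64 delete -p embed-certs-878000
	out/minikube-darwin-arm64 start -p embed-certs-878000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=qemu2 --kubernetes-version=v1.29.3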

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-391000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-391000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (9.800114292s)

-- stdout --
	* [default-k8s-diff-port-391000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "default-k8s-diff-port-391000" primary control-plane node in "default-k8s-diff-port-391000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "default-k8s-diff-port-391000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:27:59.550408   16399 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:27:59.550545   16399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:59.550549   16399 out.go:304] Setting ErrFile to fd 2...
	I0320 17:27:59.550551   16399 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:27:59.550682   16399 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:27:59.551779   16399 out.go:298] Setting JSON to false
	I0320 17:27:59.567973   16399 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8849,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:27:59.568036   16399 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:27:59.573318   16399 out.go:177] * [default-k8s-diff-port-391000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:27:59.579225   16399 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:27:59.579289   16399 notify.go:220] Checking for updates...
	I0320 17:27:59.583235   16399 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:27:59.587191   16399 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:27:59.590201   16399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:27:59.593223   16399 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:27:59.596276   16399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:27:59.599561   16399 config.go:182] Loaded profile config "embed-certs-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:59.599620   16399 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:27:59.599671   16399 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:27:59.604209   16399 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:27:59.611198   16399 start.go:297] selected driver: qemu2
	I0320 17:27:59.611203   16399 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:27:59.611209   16399 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:27:59.613508   16399 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:27:59.617204   16399 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:27:59.620255   16399 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:27:59.620306   16399 cni.go:84] Creating CNI manager for ""
	I0320 17:27:59.620313   16399 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:27:59.620322   16399 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:27:59.620349   16399 start.go:340] cluster config:
	{Name:default-k8s-diff-port-391000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:27:59.624941   16399 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:27:59.632154   16399 out.go:177] * Starting "default-k8s-diff-port-391000" primary control-plane node in "default-k8s-diff-port-391000" cluster
	I0320 17:27:59.636191   16399 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:27:59.636209   16399 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:27:59.636218   16399 cache.go:56] Caching tarball of preloaded images
	I0320 17:27:59.636292   16399 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:27:59.636304   16399 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:27:59.636386   16399 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/default-k8s-diff-port-391000/config.json ...
	I0320 17:27:59.636401   16399 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/default-k8s-diff-port-391000/config.json: {Name:mk2d29891d89870d52217c8a32f727ccd729d2cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:27:59.636719   16399 start.go:360] acquireMachinesLock for default-k8s-diff-port-391000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:27:59.636754   16399 start.go:364] duration metric: took 26.958µs to acquireMachinesLock for "default-k8s-diff-port-391000"
	I0320 17:27:59.636766   16399 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-391000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:27:59.636793   16399 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:27:59.645175   16399 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:27:59.663123   16399 start.go:159] libmachine.API.Create for "default-k8s-diff-port-391000" (driver="qemu2")
	I0320 17:27:59.663154   16399 client.go:168] LocalClient.Create starting
	I0320 17:27:59.663223   16399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:27:59.663254   16399 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:59.663266   16399 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:59.663316   16399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:27:59.663339   16399 main.go:141] libmachine: Decoding PEM data...
	I0320 17:27:59.663347   16399 main.go:141] libmachine: Parsing certificate...
	I0320 17:27:59.663726   16399 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:27:59.806351   16399 main.go:141] libmachine: Creating SSH key...
	I0320 17:27:59.888103   16399 main.go:141] libmachine: Creating Disk image...
	I0320 17:27:59.888108   16399 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:27:59.888286   16399 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:27:59.900953   16399 main.go:141] libmachine: STDOUT: 
	I0320 17:27:59.900967   16399 main.go:141] libmachine: STDERR: 
	I0320 17:27:59.901011   16399 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2 +20000M
	I0320 17:27:59.912233   16399 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:27:59.912258   16399 main.go:141] libmachine: STDERR: 
	I0320 17:27:59.912275   16399 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:27:59.912280   16399 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:27:59.912332   16399 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=e6:4b:cf:f0:25:a0 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:27:59.914122   16399 main.go:141] libmachine: STDOUT: 
	I0320 17:27:59.914140   16399 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:27:59.914158   16399 client.go:171] duration metric: took 251.008125ms to LocalClient.Create
	I0320 17:28:01.916316   16399 start.go:128] duration metric: took 2.279587375s to createHost
	I0320 17:28:01.916403   16399 start.go:83] releasing machines lock for "default-k8s-diff-port-391000", held for 2.279726958s
	W0320 17:28:01.916478   16399 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:01.930972   16399 out.go:177] * Deleting "default-k8s-diff-port-391000" in qemu2 ...
	W0320 17:28:01.967126   16399 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:01.967147   16399 start.go:728] Will try again in 5 seconds ...
	I0320 17:28:06.967634   16399 start.go:360] acquireMachinesLock for default-k8s-diff-port-391000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:06.968101   16399 start.go:364] duration metric: took 348.541µs to acquireMachinesLock for "default-k8s-diff-port-391000"
	I0320 17:28:06.968269   16399 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-391000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:28:06.968610   16399 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:28:06.974338   16399 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:28:07.024352   16399 start.go:159] libmachine.API.Create for "default-k8s-diff-port-391000" (driver="qemu2")
	I0320 17:28:07.024401   16399 client.go:168] LocalClient.Create starting
	I0320 17:28:07.024529   16399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:28:07.024600   16399 main.go:141] libmachine: Decoding PEM data...
	I0320 17:28:07.024617   16399 main.go:141] libmachine: Parsing certificate...
	I0320 17:28:07.024675   16399 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:28:07.024723   16399 main.go:141] libmachine: Decoding PEM data...
	I0320 17:28:07.024746   16399 main.go:141] libmachine: Parsing certificate...
	I0320 17:28:07.025446   16399 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:28:07.179647   16399 main.go:141] libmachine: Creating SSH key...
	I0320 17:28:07.248763   16399 main.go:141] libmachine: Creating Disk image...
	I0320 17:28:07.248769   16399 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:28:07.248950   16399 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:28:07.261368   16399 main.go:141] libmachine: STDOUT: 
	I0320 17:28:07.261397   16399 main.go:141] libmachine: STDERR: 
	I0320 17:28:07.261459   16399 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2 +20000M
	I0320 17:28:07.272048   16399 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:28:07.272064   16399 main.go:141] libmachine: STDERR: 
	I0320 17:28:07.272085   16399 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:28:07.272090   16399 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:28:07.272119   16399 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f9:70:3f:73:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:28:07.273828   16399 main.go:141] libmachine: STDOUT: 
	I0320 17:28:07.273865   16399 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:07.273877   16399 client.go:171] duration metric: took 249.479666ms to LocalClient.Create
	I0320 17:28:09.275975   16399 start.go:128] duration metric: took 2.307421958s to createHost
	I0320 17:28:09.276019   16399 start.go:83] releasing machines lock for "default-k8s-diff-port-391000", held for 2.307965583s
	W0320 17:28:09.276357   16399 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-391000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:09.285913   16399 out.go:177] 
	W0320 17:28:09.291203   16399 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:09.291230   16399 out.go:239] * 
	W0320 17:28:09.293634   16399 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:28:09.305060   16399 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-391000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (67.294083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (9.87s)
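Every start attempt above dies with the same STDERR before the VM even boots: Failed to connect to "/var/run/socket_vmnet": Connection refused. That message means nothing is listening on the unix socket that /opt/socket_vmnet/bin/socket_vmnet_client dials, i.e. the socket_vmnet daemon is not running on the CI host. A minimal Go probe of that socket (illustrative only, not part of the test suite) reproduces the check:

    package main

    import (
        "fmt"
        "net"
    )

    // Dial the same unix socket that socket_vmnet_client connects to. A
    // "connection refused" here is exactly the failure seen in the log.
    func main() {
        conn, err := net.Dial("unix", "/var/run/socket_vmnet")
        if err != nil {
            fmt.Println("socket_vmnet not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("socket_vmnet is listening; VM networking should work")
    }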
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-878000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (36.000333ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.04s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-878000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.435667ms)

** stderr **
	error: context "embed-certs-878000" does not exist
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-878000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (30.701583ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.06s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p embed-certs-878000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (31.49875ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.07s)
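The "(-want +got)" diff above follows the output convention of the go-cmp package: entries prefixed with "-" are expected images that were never found, and the got side is empty because the VM never started. A minimal sketch of how such a diff is produced (hypothetical want/got values, not the real test code):

    package main

    import (
        "fmt"

        "github.com/google/go-cmp/cmp"
    )

    func main() {
        want := []string{
            "registry.k8s.io/kube-apiserver:v1.29.3",
            "registry.k8s.io/pause:3.9",
        }
        got := []string{} // empty: no VM, so `image list` returned nothing
        if diff := cmp.Diff(want, got); diff != "" {
            fmt.Printf("v1.29.3 images missing (-want +got):\n%s", diff)
        }
    }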
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p embed-certs-878000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p embed-certs-878000 --alsologtostderr -v=1: exit status 83 (43.823417ms)

-- stdout --
	* The control-plane node embed-certs-878000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p embed-certs-878000"
-- /stdout --
** stderr ** 
	I0320 17:28:02.220410   16422 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:28:02.220552   16422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:02.220556   16422 out.go:304] Setting ErrFile to fd 2...
	I0320 17:28:02.220558   16422 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:02.220691   16422 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:28:02.220915   16422 out.go:298] Setting JSON to false
	I0320 17:28:02.220925   16422 mustload.go:65] Loading cluster: embed-certs-878000
	I0320 17:28:02.221112   16422 config.go:182] Loaded profile config "embed-certs-878000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:28:02.225109   16422 out.go:177] * The control-plane node embed-certs-878000 host is not running: state=Stopped
	I0320 17:28:02.229068   16422 out.go:177]   To start a cluster, run: "minikube start -p embed-certs-878000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p embed-certs-878000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (31.345ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (31.548ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-878000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.11s)
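The repeated "TERM=,COLORTERM=, which probably does not support color" lines above show the CLI inspecting the environment before enabling colored output; under the Jenkins agent both variables are empty, so color is disabled. A trivial sketch of that kind of check (illustrative, not minikube's actual out package):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        term, colorterm := os.Getenv("TERM"), os.Getenv("COLORTERM")
        if term == "" && colorterm == "" {
            fmt.Printf("TERM=%s,COLORTERM=%s, which probably does not support color\n", term, colorterm)
        }
    }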
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (9.82765225s)

-- stdout --
	* [newest-cni-054000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on user configuration
	* Automatically selected the socket_vmnet network
	* Starting "newest-cni-054000" primary control-plane node in "newest-cni-054000" cluster
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Deleting "newest-cni-054000" in qemu2 ...
	* Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused

-- /stdout --
** stderr ** 
	I0320 17:28:02.697309   16445 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:28:02.697440   16445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:02.697443   16445 out.go:304] Setting ErrFile to fd 2...
	I0320 17:28:02.697445   16445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:02.697567   16445 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:28:02.698637   16445 out.go:298] Setting JSON to false
	I0320 17:28:02.714932   16445 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8852,"bootTime":1710972030,"procs":516,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:28:02.714999   16445 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:28:02.719995   16445 out.go:177] * [newest-cni-054000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:28:02.726831   16445 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:28:02.726866   16445 notify.go:220] Checking for updates...
	I0320 17:28:02.733954   16445 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:28:02.736929   16445 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:28:02.739941   16445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:28:02.742981   16445 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:28:02.744515   16445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:28:02.748234   16445 config.go:182] Loaded profile config "default-k8s-diff-port-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:28:02.748294   16445 config.go:182] Loaded profile config "multinode-894000-m01": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:28:02.748341   16445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:28:02.752939   16445 out.go:177] * Using the qemu2 driver based on user configuration
	I0320 17:28:02.757917   16445 start.go:297] selected driver: qemu2
	I0320 17:28:02.757925   16445 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:28:02.757931   16445 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:28:02.760263   16445 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0320 17:28:02.760286   16445 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0320 17:28:02.763979   16445 out.go:177] * Automatically selected the socket_vmnet network
	I0320 17:28:02.769030   16445 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0320 17:28:02.769075   16445 cni.go:84] Creating CNI manager for ""
	I0320 17:28:02.769084   16445 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:28:02.769088   16445 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:28:02.769110   16445 start.go:340] cluster config:
	{Name:newest-cni-054000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-054000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:28:02.773835   16445 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:28:02.781914   16445 out.go:177] * Starting "newest-cni-054000" primary control-plane node in "newest-cni-054000" cluster
	I0320 17:28:02.785967   16445 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:28:02.785983   16445 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0320 17:28:02.785991   16445 cache.go:56] Caching tarball of preloaded images
	I0320 17:28:02.786053   16445 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:28:02.786060   16445 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0320 17:28:02.786135   16445 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/newest-cni-054000/config.json ...
	I0320 17:28:02.786147   16445 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/newest-cni-054000/config.json: {Name:mk7516769ee31db6118fd4023d649905891fd849 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:28:02.786393   16445 start.go:360] acquireMachinesLock for newest-cni-054000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:02.786429   16445 start.go:364] duration metric: took 30.25µs to acquireMachinesLock for "newest-cni-054000"
	I0320 17:28:02.786447   16445 start.go:93] Provisioning new machine with config: &{Name:newest-cni-054000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-054000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:28:02.786483   16445 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:28:02.794942   16445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:28:02.812950   16445 start.go:159] libmachine.API.Create for "newest-cni-054000" (driver="qemu2")
	I0320 17:28:02.812975   16445 client.go:168] LocalClient.Create starting
	I0320 17:28:02.813038   16445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:28:02.813076   16445 main.go:141] libmachine: Decoding PEM data...
	I0320 17:28:02.813090   16445 main.go:141] libmachine: Parsing certificate...
	I0320 17:28:02.813138   16445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:28:02.813163   16445 main.go:141] libmachine: Decoding PEM data...
	I0320 17:28:02.813172   16445 main.go:141] libmachine: Parsing certificate...
	I0320 17:28:02.813552   16445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:28:02.956910   16445 main.go:141] libmachine: Creating SSH key...
	I0320 17:28:02.999287   16445 main.go:141] libmachine: Creating Disk image...
	I0320 17:28:02.999292   16445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:28:02.999470   16445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:03.011613   16445 main.go:141] libmachine: STDOUT: 
	I0320 17:28:03.011636   16445 main.go:141] libmachine: STDERR: 
	I0320 17:28:03.011704   16445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2 +20000M
	I0320 17:28:03.022322   16445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:28:03.022339   16445 main.go:141] libmachine: STDERR: 
	I0320 17:28:03.022363   16445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:03.022369   16445 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:28:03.022402   16445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/qemu.pid -device virtio-net-pci,netdev=net0,mac=9e:29:b7:ff:28:d3 -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:03.024069   16445 main.go:141] libmachine: STDOUT: 
	I0320 17:28:03.024088   16445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:03.024106   16445 client.go:171] duration metric: took 211.133709ms to LocalClient.Create
	I0320 17:28:05.024513   16445 start.go:128] duration metric: took 2.238083542s to createHost
	I0320 17:28:05.024622   16445 start.go:83] releasing machines lock for "newest-cni-054000", held for 2.238269209s
	W0320 17:28:05.024751   16445 start.go:713] error starting host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:05.035159   16445 out.go:177] * Deleting "newest-cni-054000" in qemu2 ...
	W0320 17:28:05.061130   16445 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:05.061163   16445 start.go:728] Will try again in 5 seconds ...
	I0320 17:28:10.061578   16445 start.go:360] acquireMachinesLock for newest-cni-054000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:10.061907   16445 start.go:364] duration metric: took 260.083µs to acquireMachinesLock for "newest-cni-054000"
	I0320 17:28:10.062075   16445 start.go:93] Provisioning new machine with config: &{Name:newest-cni-054000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-054000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0320 17:28:10.062403   16445 start.go:125] createHost starting for "" (driver="qemu2")
	I0320 17:28:10.067369   16445 out.go:204] * Creating qemu2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0320 17:28:10.115738   16445 start.go:159] libmachine.API.Create for "newest-cni-054000" (driver="qemu2")
	I0320 17:28:10.115939   16445 client.go:168] LocalClient.Create starting
	I0320 17:28:10.116040   16445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/ca.pem
	I0320 17:28:10.116089   16445 main.go:141] libmachine: Decoding PEM data...
	I0320 17:28:10.116108   16445 main.go:141] libmachine: Parsing certificate...
	I0320 17:28:10.116169   16445 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/17713-12248/.minikube/certs/cert.pem
	I0320 17:28:10.116197   16445 main.go:141] libmachine: Decoding PEM data...
	I0320 17:28:10.116207   16445 main.go:141] libmachine: Parsing certificate...
	I0320 17:28:10.116746   16445 main.go:141] libmachine: Downloading /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso...
	I0320 17:28:10.270212   16445 main.go:141] libmachine: Creating SSH key...
	I0320 17:28:10.421773   16445 main.go:141] libmachine: Creating Disk image...
	I0320 17:28:10.421780   16445 main.go:141] libmachine: Creating 20000 MB hard disk image...
	I0320 17:28:10.421988   16445 main.go:141] libmachine: executing: qemu-img convert -f raw -O qcow2 /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2.raw /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:10.434163   16445 main.go:141] libmachine: STDOUT: 
	I0320 17:28:10.434183   16445 main.go:141] libmachine: STDERR: 
	I0320 17:28:10.434235   16445 main.go:141] libmachine: executing: qemu-img resize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2 +20000M
	I0320 17:28:10.444918   16445 main.go:141] libmachine: STDOUT: Image resized.
	
	I0320 17:28:10.444933   16445 main.go:141] libmachine: STDERR: 
	I0320 17:28:10.444943   16445 main.go:141] libmachine: DONE writing to /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2.raw and /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:10.444948   16445 main.go:141] libmachine: Starting QEMU VM...
	I0320 17:28:10.444988   16445 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b0:f2:03:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:10.446721   16445 main.go:141] libmachine: STDOUT: 
	I0320 17:28:10.446738   16445 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:10.446751   16445 client.go:171] duration metric: took 330.819333ms to LocalClient.Create
	I0320 17:28:12.448884   16445 start.go:128] duration metric: took 2.386529458s to createHost
	I0320 17:28:12.448969   16445 start.go:83] releasing machines lock for "newest-cni-054000", held for 2.387131166s
	W0320 17:28:12.449265   16445 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-054000" may fix it: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:12.463780   16445 out.go:177] 
	W0320 17:28:12.468095   16445 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: creating host: create: creating: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:12.468123   16445 out.go:239] * 
	W0320 17:28:12.470935   16445 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:28:12.479896   16445 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000: exit status 7 (69.433083ms)

-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-054000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (9.90s)
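The control flow visible in this log is a single delayed retry: StartHost fails, the half-created profile is deleted, start.go waits five seconds, tries once more, then exits with GUEST_PROVISION. A simplified sketch of that pattern (illustrative only, not minikube's actual implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // createHost stands in for the libmachine create call; in this run it
    // always fails because the socket_vmnet daemon is unreachable.
    func createHost() error {
        return errors.New(`Failed to connect to "/var/run/socket_vmnet": Connection refused`)
    }

    func main() {
        err := createHost()
        if err == nil {
            return
        }
        fmt.Println("! StartHost failed, but will try again:", err)
        time.Sleep(5 * time.Second)
        if err := createHost(); err != nil {
            fmt.Println("X Exiting due to GUEST_PROVISION:", err)
        }
    }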
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-391000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-391000 create -f testdata/busybox.yaml: exit status 1 (29.119083ms)

** stderr ** 
	error: context "default-k8s-diff-port-391000" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-391000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.633375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.872208ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.09s)
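
The kubectl error above is secondary damage: because the profile never started, no context named default-k8s-diff-port-391000 was ever written to the kubeconfig, so every kubectl --context invocation fails identically. This can be confirmed from the same shell with standard kubectl subcommands (a sketch; the KUBECONFIG path is copied from the start logs in this report):

	KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig \
	  kubectl config get-contexts -o name
	# An empty list (or one without the profile name) reproduces the
	# `context "..." does not exist` error seen in DeployApp.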

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p default-k8s-diff-port-391000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-391000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-391000 describe deploy/metrics-server -n kube-system: exit status 1 (27.606833ms)

** stderr ** 
	error: context "default-k8s-diff-port-391000" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-391000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.303541ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.12s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p default-k8s-diff-port-391000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p default-k8s-diff-port-391000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3: exit status 80 (5.191870667s)

-- stdout --
	* [default-k8s-diff-port-391000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "default-k8s-diff-port-391000" primary control-plane node in "default-k8s-diff-port-391000" cluster
	* Restarting existing qemu2 VM for "default-k8s-diff-port-391000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "default-k8s-diff-port-391000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:28:13.150115   16510 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:28:13.150260   16510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:13.150263   16510 out.go:304] Setting ErrFile to fd 2...
	I0320 17:28:13.150265   16510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:13.150418   16510 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:28:13.151442   16510 out.go:298] Setting JSON to false
	I0320 17:28:13.167660   16510 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8863,"bootTime":1710972030,"procs":519,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:28:13.167716   16510 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:28:13.173105   16510 out.go:177] * [default-k8s-diff-port-391000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:28:13.180975   16510 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:28:13.184108   16510 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:28:13.181030   16510 notify.go:220] Checking for updates...
	I0320 17:28:13.192059   16510 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:28:13.195053   16510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:28:13.198120   16510 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:28:13.201052   16510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:28:13.204394   16510 config.go:182] Loaded profile config "default-k8s-diff-port-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:28:13.204665   16510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:28:13.209083   16510 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:28:13.216022   16510 start.go:297] selected driver: qemu2
	I0320 17:28:13.216027   16510 start.go:901] validating driver "qemu2" against &{Name:default-k8s-diff-port-391000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:28:13.216087   16510 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:28:13.218398   16510 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0320 17:28:13.218450   16510 cni.go:84] Creating CNI manager for ""
	I0320 17:28:13.218458   16510 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:28:13.218483   16510 start.go:340] cluster config:
	{Name:default-k8s-diff-port-391000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:default-k8s-diff-port-391000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:28:13.222869   16510 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:28:13.231097   16510 out.go:177] * Starting "default-k8s-diff-port-391000" primary control-plane node in "default-k8s-diff-port-391000" cluster
	I0320 17:28:13.235093   16510 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:28:13.235110   16510 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:28:13.235121   16510 cache.go:56] Caching tarball of preloaded images
	I0320 17:28:13.235191   16510 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:28:13.235197   16510 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:28:13.235267   16510 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/default-k8s-diff-port-391000/config.json ...
	I0320 17:28:13.235780   16510 start.go:360] acquireMachinesLock for default-k8s-diff-port-391000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:13.235811   16510 start.go:364] duration metric: took 23.625µs to acquireMachinesLock for "default-k8s-diff-port-391000"
	I0320 17:28:13.235822   16510 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:28:13.235828   16510 fix.go:54] fixHost starting: 
	I0320 17:28:13.235963   16510 fix.go:112] recreateIfNeeded on default-k8s-diff-port-391000: state=Stopped err=<nil>
	W0320 17:28:13.235972   16510 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:28:13.240062   16510 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-391000" ...
	I0320 17:28:13.247085   16510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f9:70:3f:73:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:28:13.249247   16510 main.go:141] libmachine: STDOUT: 
	I0320 17:28:13.249279   16510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:13.249312   16510 fix.go:56] duration metric: took 13.483834ms for fixHost
	I0320 17:28:13.249318   16510 start.go:83] releasing machines lock for "default-k8s-diff-port-391000", held for 13.502292ms
	W0320 17:28:13.249328   16510 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:13.249368   16510 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:13.249374   16510 start.go:728] Will try again in 5 seconds ...
	I0320 17:28:18.251378   16510 start.go:360] acquireMachinesLock for default-k8s-diff-port-391000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:18.251792   16510 start.go:364] duration metric: took 279.292µs to acquireMachinesLock for "default-k8s-diff-port-391000"
	I0320 17:28:18.251912   16510 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:28:18.251934   16510 fix.go:54] fixHost starting: 
	I0320 17:28:18.252701   16510 fix.go:112] recreateIfNeeded on default-k8s-diff-port-391000: state=Stopped err=<nil>
	W0320 17:28:18.252729   16510 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:28:18.261700   16510 out.go:177] * Restarting existing qemu2 VM for "default-k8s-diff-port-391000" ...
	I0320 17:28:18.264897   16510 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/qemu.pid -device virtio-net-pci,netdev=net0,mac=be:f9:70:3f:73:cf -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/default-k8s-diff-port-391000/disk.qcow2
	I0320 17:28:18.274271   16510 main.go:141] libmachine: STDOUT: 
	I0320 17:28:18.274340   16510 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:18.274414   16510 fix.go:56] duration metric: took 22.4795ms for fixHost
	I0320 17:28:18.274433   16510 start.go:83] releasing machines lock for "default-k8s-diff-port-391000", held for 22.622125ms
	W0320 17:28:18.274600   16510 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-391000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p default-k8s-diff-port-391000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:18.281684   16510 out.go:177] 
	W0320 17:28:18.285778   16510 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:18.285798   16510 out.go:239] * 
	* 
	W0320 17:28:18.288419   16510 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:28:18.296687   16510 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p default-k8s-diff-port-391000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=qemu2  --kubernetes-version=v1.29.3": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (67.594375ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (5.26s)
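
Note that the restart fails before qemu itself ever runs: socket_vmnet_client exits the moment its connection to /var/run/socket_vmnet is refused. That can be reproduced without booting a VM by handing socket_vmnet_client a trivial command in place of qemu-system-aarch64 (a sketch; it assumes socket_vmnet_client's documented behavior of connecting to the socket first and then exec'ing the given command):

	/opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet /usr/bin/true \
	  && echo "socket reachable" \
	  || echo "connection refused: the daemon, not qemu, is the problem"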

TestStartStop/group/newest-cni/serial/SecondStart (5.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0: exit status 80 (5.195839375s)

-- stdout --
	* [newest-cni-054000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	* Starting "newest-cni-054000" primary control-plane node in "newest-cni-054000" cluster
	* Restarting existing qemu2 VM for "newest-cni-054000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	* Restarting existing qemu2 VM for "newest-cni-054000" ...
	OUTPUT: 
	ERROR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	
	

-- /stdout --
** stderr ** 
	I0320 17:28:15.950427   16535 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:28:15.950552   16535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:15.950555   16535 out.go:304] Setting ErrFile to fd 2...
	I0320 17:28:15.950557   16535 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:15.950676   16535 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:28:15.951653   16535 out.go:298] Setting JSON to false
	I0320 17:28:15.967869   16535 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":8865,"bootTime":1710972030,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:28:15.967932   16535 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:28:15.972184   16535 out.go:177] * [newest-cni-054000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:28:15.980212   16535 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:28:15.980244   16535 notify.go:220] Checking for updates...
	I0320 17:28:15.987264   16535 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:28:15.988709   16535 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:28:15.992214   16535 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:28:15.996243   16535 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:28:16.002239   16535 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:28:16.005549   16535 config.go:182] Loaded profile config "newest-cni-054000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0320 17:28:16.005802   16535 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:28:16.010086   16535 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:28:16.017220   16535 start.go:297] selected driver: qemu2
	I0320 17:28:16.017225   16535 start.go:901] validating driver "qemu2" against &{Name:newest-cni-054000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-054000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:28:16.017284   16535 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:28:16.019567   16535 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0320 17:28:16.019610   16535 cni.go:84] Creating CNI manager for ""
	I0320 17:28:16.019617   16535 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:28:16.019639   16535 start.go:340] cluster config:
	{Name:newest-cni-054000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:newest-cni-054000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:28:16.023974   16535 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:28:16.032244   16535 out.go:177] * Starting "newest-cni-054000" primary control-plane node in "newest-cni-054000" cluster
	I0320 17:28:16.036244   16535 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:28:16.036260   16535 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0320 17:28:16.036268   16535 cache.go:56] Caching tarball of preloaded images
	I0320 17:28:16.036334   16535 preload.go:173] Found /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0320 17:28:16.036340   16535 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0320 17:28:16.036418   16535 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/newest-cni-054000/config.json ...
	I0320 17:28:16.036943   16535 start.go:360] acquireMachinesLock for newest-cni-054000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:16.036971   16535 start.go:364] duration metric: took 21.541µs to acquireMachinesLock for "newest-cni-054000"
	I0320 17:28:16.036981   16535 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:28:16.036985   16535 fix.go:54] fixHost starting: 
	I0320 17:28:16.037116   16535 fix.go:112] recreateIfNeeded on newest-cni-054000: state=Stopped err=<nil>
	W0320 17:28:16.037125   16535 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:28:16.041266   16535 out.go:177] * Restarting existing qemu2 VM for "newest-cni-054000" ...
	I0320 17:28:16.049270   16535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b0:f2:03:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:16.051421   16535 main.go:141] libmachine: STDOUT: 
	I0320 17:28:16.051447   16535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:16.051480   16535 fix.go:56] duration metric: took 14.493833ms for fixHost
	I0320 17:28:16.051484   16535 start.go:83] releasing machines lock for "newest-cni-054000", held for 14.509584ms
	W0320 17:28:16.051493   16535 start.go:713] error starting host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:16.051534   16535 out.go:239] ! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	! StartHost failed, but will try again: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:16.051539   16535 start.go:728] Will try again in 5 seconds ...
	I0320 17:28:21.053600   16535 start.go:360] acquireMachinesLock for newest-cni-054000: {Name:mk37b0d6bdab55aab4dcaf80299e2853fe85e231 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0320 17:28:21.054075   16535 start.go:364] duration metric: took 338.417µs to acquireMachinesLock for "newest-cni-054000"
	I0320 17:28:21.054218   16535 start.go:96] Skipping create...Using existing machine configuration
	I0320 17:28:21.054242   16535 fix.go:54] fixHost starting: 
	I0320 17:28:21.054987   16535 fix.go:112] recreateIfNeeded on newest-cni-054000: state=Stopped err=<nil>
	W0320 17:28:21.055014   16535 fix.go:138] unexpected machine state, will restart: <nil>
	I0320 17:28:21.060454   16535 out.go:177] * Restarting existing qemu2 VM for "newest-cni-054000" ...
	I0320 17:28:21.068548   16535 main.go:141] libmachine: executing: /opt/socket_vmnet/bin/socket_vmnet_client /var/run/socket_vmnet qemu-system-aarch64 -M virt,highmem=off -cpu host -drive file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd,readonly=on,format=raw,if=pflash -display none -accel hvf -m 2200 -smp 2 -boot d -cdrom /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/boot2docker.iso -qmp unix:/Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/monitor,server,nowait -pidfile /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/qemu.pid -device virtio-net-pci,netdev=net0,mac=46:b0:f2:03:05:da -netdev socket,id=net0,fd=3 -daemonize /Users/jenkins/minikube-integration/17713-12248/.minikube/machines/newest-cni-054000/disk.qcow2
	I0320 17:28:21.078555   16535 main.go:141] libmachine: STDOUT: 
	I0320 17:28:21.078614   16535 main.go:141] libmachine: STDERR: Failed to connect to "/var/run/socket_vmnet": Connection refused
	
	I0320 17:28:21.078698   16535 fix.go:56] duration metric: took 24.460833ms for fixHost
	I0320 17:28:21.078713   16535 start.go:83] releasing machines lock for "newest-cni-054000", held for 24.616042ms
	W0320 17:28:21.078923   16535 out.go:239] * Failed to start qemu2 VM. Running "minikube delete -p newest-cni-054000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	* Failed to start qemu2 VM. Running "minikube delete -p newest-cni-054000" may fix it: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	I0320 17:28:21.086370   16535 out.go:177] 
	W0320 17:28:21.089534   16535 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: Failed to connect to "/var/run/socket_vmnet": Connection refused: exit status 1
	W0320 17:28:21.089556   16535 out.go:239] * 
	* 
	W0320 17:28:21.092123   16535 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:28:21.103450   16535 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2  --kubernetes-version=v1.30.0-beta.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000: exit status 7 (71.051125ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-054000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (5.27s)
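
The advice minikube prints ("minikube delete -p newest-cni-054000" may fix it) only helps once the socket daemon is healthy again; with /var/run/socket_vmnet still refusing connections, a recreated profile fails the same way. The recovery sequence, reusing the exact start arguments from this run, would be:

	out/minikube-darwin-arm64 delete -p newest-cni-054000
	out/minikube-darwin-arm64 start -p newest-cni-054000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=qemu2 --kubernetes-version=v1.30.0-beta.0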

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-391000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (33.414833ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.03s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-391000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (26.925583ms)

** stderr ** 
	error: context "default-k8s-diff-port-391000" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-391000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.6745ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p default-k8s-diff-port-391000 image list --format=json
start_stop_delete_test.go:304: v1.29.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.29.3",
- 	"registry.k8s.io/kube-controller-manager:v1.29.3",
- 	"registry.k8s.io/kube-proxy:v1.29.3",
- 	"registry.k8s.io/kube-scheduler:v1.29.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.621292ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.07s)
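
The -want list above is the complete expected image set for v1.29.3; with the host stopped, image list returns nothing, so every entry is reported missing. On a healthy profile the same comparison can be approximated by hand (a sketch; the repoTags field name and the use of jq are assumptions, not taken from this report):

	out/minikube-darwin-arm64 -p default-k8s-diff-port-391000 image list --format=json \
	  | jq -r '.[].repoTags[]' | sort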

TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p default-k8s-diff-port-391000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-391000 --alsologtostderr -v=1: exit status 83 (43.023917ms)

-- stdout --
	* The control-plane node default-k8s-diff-port-391000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p default-k8s-diff-port-391000"

-- /stdout --
** stderr ** 
	I0320 17:28:18.577386   16554 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:28:18.577564   16554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:18.577567   16554 out.go:304] Setting ErrFile to fd 2...
	I0320 17:28:18.577569   16554 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:18.577698   16554 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:28:18.577927   16554 out.go:298] Setting JSON to false
	I0320 17:28:18.577937   16554 mustload.go:65] Loading cluster: default-k8s-diff-port-391000
	I0320 17:28:18.578117   16554 config.go:182] Loaded profile config "default-k8s-diff-port-391000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:28:18.582083   16554 out.go:177] * The control-plane node default-k8s-diff-port-391000 host is not running: state=Stopped
	I0320 17:28:18.586071   16554 out.go:177]   To start a cluster, run: "minikube start -p default-k8s-diff-port-391000"

** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p default-k8s-diff-port-391000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.615958ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (31.001708ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-391000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.11s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-arm64 -p newest-cni-054000 image list --format=json
start_stop_delete_test.go:304: v1.30.0-beta.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.11.1",
- 	"registry.k8s.io/etcd:3.5.12-0",
- 	"registry.k8s.io/kube-apiserver:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-controller-manager:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-proxy:v1.30.0-beta.0",
- 	"registry.k8s.io/kube-scheduler:v1.30.0-beta.0",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000: exit status 7 (31.6215ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-054000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.08s)

TestStartStop/group/newest-cni/serial/Pause (0.11s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-arm64 pause -p newest-cni-054000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-arm64 pause -p newest-cni-054000 --alsologtostderr -v=1: exit status 83 (44.014708ms)

-- stdout --
	* The control-plane node newest-cni-054000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p newest-cni-054000"

-- /stdout --
** stderr ** 
	I0320 17:28:21.295111   16586 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:28:21.295278   16586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:21.295282   16586 out.go:304] Setting ErrFile to fd 2...
	I0320 17:28:21.295284   16586 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:28:21.295406   16586 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:28:21.295633   16586 out.go:298] Setting JSON to false
	I0320 17:28:21.295641   16586 mustload.go:65] Loading cluster: newest-cni-054000
	I0320 17:28:21.295831   16586 config.go:182] Loaded profile config "newest-cni-054000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.30.0-beta.0
	I0320 17:28:21.300317   16586 out.go:177] * The control-plane node newest-cni-054000 host is not running: state=Stopped
	I0320 17:28:21.304456   16586 out.go:177]   To start a cluster, run: "minikube start -p newest-cni-054000"
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-arm64 pause -p newest-cni-054000 --alsologtostderr -v=1 failed: exit status 83
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000: exit status 7 (31.859875ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-054000" host is not running, skipping log retrieval (state="Stopped")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000: exit status 7 (31.395167ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-054000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.11s)
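Note: two different exit codes do the work in this block: `pause` exits 83 when it refuses to act on a stopped control plane (the "host is not running: state=Stopped" advice above), while the post-mortem `status` probe exits 7, which the harness explicitly tolerates ("may be ok"). The probe can be re-run by hand (profile name from the log):

	out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000; echo "exit=$?"
	# expected for this profile: prints "Stopped" and exit=7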
Test pass (86/266)

Order passed test Duration
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.24
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.29.3/json-events 23.4
13 TestDownloadOnly/v1.29.3/preload-exists 0
16 TestDownloadOnly/v1.29.3/kubectl 0
17 TestDownloadOnly/v1.29.3/LogsDuration 0.08
18 TestDownloadOnly/v1.29.3/DeleteAll 0.24
19 TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds 0.23
21 TestDownloadOnly/v1.30.0-beta.0/json-events 27.23
22 TestDownloadOnly/v1.30.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.30.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.30.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.30.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds 0.23
30 TestBinaryMirror 0.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
44 TestHyperKitDriverInstallOrUpdate 9.88
48 TestErrorSpam/start 0.39
49 TestErrorSpam/status 0.1
50 TestErrorSpam/pause 0.14
51 TestErrorSpam/unpause 0.13
52 TestErrorSpam/stop 10.26
55 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/CacheCmd/cache/add_remote 6.26
64 TestFunctional/serial/CacheCmd/cache/add_local 1.17
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
66 TestFunctional/serial/CacheCmd/cache/list 0.04
69 TestFunctional/serial/CacheCmd/cache/delete 0.08
78 TestFunctional/parallel/ConfigCmd 0.24
80 TestFunctional/parallel/DryRun 0.29
81 TestFunctional/parallel/InternationalLanguage 0.12
87 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/License 1.41
103 TestFunctional/parallel/Version/short 0.04
110 TestFunctional/parallel/ImageCommands/Setup 5.61
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.08
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.13
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.14
135 TestFunctional/parallel/ProfileCmd/profile_list 0.11
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.11
141 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 10.04
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.17
144 TestFunctional/delete_addon-resizer_images 0.17
145 TestFunctional/delete_my-image_image 0.04
146 TestFunctional/delete_minikube_cached_images 0.04
175 TestJSONOutput/start/Audit 0
177 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
181 TestJSONOutput/pause/Audit 0
183 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/unpause/Audit 0
189 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/stop/Command 2.08
193 TestJSONOutput/stop/Audit 0
195 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
197 TestErrorJSONOutput 0.33
202 TestMainNoArgs 0.04
247 TestStoppedBinaryUpgrade/Setup 4.94
249 TestStoppedBinaryUpgrade/MinikubeLogs 0.79
260 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
266 TestNoKubernetes/serial/VerifyK8sNotRunning 0.05
267 TestNoKubernetes/serial/ProfileList 0.16
268 TestNoKubernetes/serial/Stop 2.1
270 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.05
284 TestStartStop/group/old-k8s-version/serial/Stop 2.93
285 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.12
295 TestStartStop/group/no-preload/serial/Stop 3.12
296 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.13
306 TestStartStop/group/embed-certs/serial/Stop 3.39
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.13
319 TestStartStop/group/default-k8s-diff-port/serial/Stop 3.39
320 TestStartStop/group/newest-cni/serial/DeployApp 0
321 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.06
322 TestStartStop/group/newest-cni/serial/Stop 3.16
323 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.13
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.13
331 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
332 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-293000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-293000: exit status 85 (99.096ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |          |
	|         | -p download-only-293000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=qemu2                 |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/20 17:00:13
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0320 17:00:13.885054   12667 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:00:13.885208   12667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:00:13.885212   12667 out.go:304] Setting ErrFile to fd 2...
	I0320 17:00:13.885214   12667 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:00:13.885342   12667 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	W0320 17:00:13.885439   12667 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/17713-12248/.minikube/config/config.json: open /Users/jenkins/minikube-integration/17713-12248/.minikube/config/config.json: no such file or directory
	I0320 17:00:13.886688   12667 out.go:298] Setting JSON to true
	I0320 17:00:13.904604   12667 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7183,"bootTime":1710972030,"procs":524,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:00:13.904665   12667 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:00:13.910727   12667 out.go:97] [download-only-293000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:00:13.914624   12667 out.go:169] MINIKUBE_LOCATION=17713
	I0320 17:00:13.910862   12667 notify.go:220] Checking for updates...
	W0320 17:00:13.910898   12667 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball: no such file or directory
	I0320 17:00:13.922685   12667 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:00:13.930741   12667 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:00:13.933646   12667 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:00:13.936610   12667 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	W0320 17:00:13.942628   12667 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0320 17:00:13.942851   12667 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:00:13.945637   12667 out.go:97] Using the qemu2 driver based on user configuration
	I0320 17:00:13.945659   12667 start.go:297] selected driver: qemu2
	I0320 17:00:13.945676   12667 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:00:13.945763   12667 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:00:13.948616   12667 out.go:169] Automatically selected the socket_vmnet network
	I0320 17:00:13.953883   12667 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0320 17:00:13.953988   12667 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:00:13.954074   12667 cni.go:84] Creating CNI manager for ""
	I0320 17:00:13.954093   12667 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0320 17:00:13.954157   12667 start.go:340] cluster config:
	{Name:download-only-293000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-293000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:00:13.959395   12667 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:00:13.962692   12667 out.go:97] Downloading VM boot image ...
	I0320 17:00:13.962723   12667 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso.sha256 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/iso/arm64/minikube-v1.32.1-1710573846-18277-arm64.iso
	I0320 17:00:32.982565   12667 out.go:97] Starting "download-only-293000" primary control-plane node in "download-only-293000" cluster
	I0320 17:00:32.982606   12667 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:00:33.273944   12667 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:00:33.273996   12667 cache.go:56] Caching tarball of preloaded images
	I0320 17:00:33.274788   12667 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:00:33.282876   12667 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0320 17:00:33.282930   12667 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:33.872628   12667 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0320 17:00:53.933714   12667 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:53.933897   12667 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:54.633819   12667 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0320 17:00:54.634022   12667 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-293000/config.json ...
	I0320 17:00:54.634042   12667 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-293000/config.json: {Name:mk952b7b9982969111b4abe857bd1b66fc41d4c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:00:54.634307   12667 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0320 17:00:54.634504   12667 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl
	I0320 17:00:54.987001   12667 out.go:169] 
	W0320 17:00:54.993212   12667 out_reason.go:110] Failed to cache kubectl: download failed: https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256: getter: &{Ctx:context.Background Src:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256 Dst:/Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.20.0/kubectl.download Pwd: Mode:2 Umask:---------- Detectors:[0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520 0x108ad7520] Decompressors:map[bz2:0x1400091aea0 gz:0x1400091aea8 tar:0x1400091ae50 tar.bz2:0x1400091ae60 tar.gz:0x1400091ae70 tar.xz:0x1400091ae80 tar.zst:0x1400091ae90 tbz2:0x1400091ae60 tgz:0x1400091ae70 txz:0x1400091ae80 tzst:0x1400091ae90 xz:0x1400091aeb0 zip:0x1400091aec0 zst:0x1400091aeb8] Getters:map[file:0x1400210ab90 http:0x14000914500 https:0x140009145a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404
	W0320 17:00:54.993236   12667 out_reason.go:110] 
	W0320 17:00:55.001028   12667 out.go:229] ╭───────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                           │
	│    If the above advice does not help, please let us know:                                 │
	│    https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                           │
	│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────╯
	I0320 17:00:55.004100   12667 out.go:169] 
	
	
	* The control-plane node download-only-293000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-293000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)
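Note: the failure buried in the log above is a 404 on the checksum sidecar for the v1.20.0 darwin/arm64 kubectl; v1.20.0 predates published darwin/arm64 kubectl builds, so this download can only fail, and the subtest still passes because it only records the expected exit status 85 from `logs`. The URL can be probed directly (copied verbatim from the log):

	curl -sL -o /dev/null -w '%{http_code}\n' https://dl.k8s.io/release/v1.20.0/bin/darwin/arm64/kubectl.sha256
	# expect 404, matching "bad response code: 404" above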

TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.24s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-293000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.29.3/json-events (23.4s)

=== RUN   TestDownloadOnly/v1.29.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-885000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-885000 --force --alsologtostderr --kubernetes-version=v1.29.3 --container-runtime=docker --driver=qemu2 : (23.397609791s)
--- PASS: TestDownloadOnly/v1.29.3/json-events (23.40s)

TestDownloadOnly/v1.29.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.3/preload-exists
--- PASS: TestDownloadOnly/v1.29.3/preload-exists (0.00s)

TestDownloadOnly/v1.29.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.29.3/kubectl
--- PASS: TestDownloadOnly/v1.29.3/kubectl (0.00s)

TestDownloadOnly/v1.29.3/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-885000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-885000: exit status 85 (81.903ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
	|         | -p download-only-293000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
	| delete  | -p download-only-293000        | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
	| start   | -o=json --download-only        | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
	|         | -p download-only-885000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=qemu2                 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/20 17:00:55
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0320 17:00:55.679231   12713 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:00:55.679352   12713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:00:55.679355   12713 out.go:304] Setting ErrFile to fd 2...
	I0320 17:00:55.679358   12713 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:00:55.679487   12713 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:00:55.680570   12713 out.go:298] Setting JSON to true
	I0320 17:00:55.697089   12713 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7225,"bootTime":1710972030,"procs":510,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:00:55.697153   12713 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:00:55.701839   12713 out.go:97] [download-only-885000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:00:55.705750   12713 out.go:169] MINIKUBE_LOCATION=17713
	I0320 17:00:55.701962   12713 notify.go:220] Checking for updates...
	I0320 17:00:55.711820   12713 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:00:55.718784   12713 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:00:55.721960   12713 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:00:55.725798   12713 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	W0320 17:00:55.732796   12713 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0320 17:00:55.732947   12713 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:00:55.735799   12713 out.go:97] Using the qemu2 driver based on user configuration
	I0320 17:00:55.735806   12713 start.go:297] selected driver: qemu2
	I0320 17:00:55.735810   12713 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:00:55.735845   12713 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:00:55.738822   12713 out.go:169] Automatically selected the socket_vmnet network
	I0320 17:00:55.744124   12713 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0320 17:00:55.744223   12713 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:00:55.744263   12713 cni.go:84] Creating CNI manager for ""
	I0320 17:00:55.744270   12713 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:00:55.744281   12713 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:00:55.744321   12713 start.go:340] cluster config:
	{Name:download-only-885000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:download-only-885000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:00:55.748690   12713 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:00:55.751821   12713 out.go:97] Starting "download-only-885000" primary control-plane node in "download-only-885000" cluster
	I0320 17:00:55.751829   12713 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:00:56.418758   12713 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:00:56.418842   12713 cache.go:56] Caching tarball of preloaded images
	I0320 17:00:56.419610   12713 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:00:56.425163   12713 out.go:97] Downloading Kubernetes v1.29.3 preload ...
	I0320 17:00:56.425188   12713 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:00:57.013168   12713 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4?checksum=md5:c0bb0715201da444334d968c298f45eb -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	I0320 17:01:12.737150   12713 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:01:12.737310   12713 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:01:13.295994   12713 cache.go:59] Finished verifying existence of preloaded tar for v1.29.3 on docker
	I0320 17:01:13.296180   12713 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-885000/config.json ...
	I0320 17:01:13.296197   12713 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-885000/config.json: {Name:mk67c9d5eec36f8446704e62dc7edd14e73f7479 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:01:13.296426   12713 preload.go:132] Checking if preload exists for k8s version v1.29.3 and runtime docker
	I0320 17:01:13.296545   12713 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.3/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.29.3/kubectl
	
	
	* The control-plane node download-only-885000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-885000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.3/LogsDuration (0.08s)
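Note: the `?checksum=md5:...` and `?checksum=file:<url>` suffixes in the download lines above are go-getter source strings (the `getter: &{...}` dump earlier in this report is that library's state); the checksum is stripped from the request and used to verify the payload after the fetch. The preload tarball can be verified the same way by hand on macOS (URL and digest from the log):

	curl -sLO https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.3/preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	md5 -q preloaded-images-k8s-v18-v1.29.3-docker-overlay2-arm64.tar.lz4
	# should print c0bb0715201da444334d968c298f45eb, the digest in the download line above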

TestDownloadOnly/v1.29.3/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.29.3/DeleteAll (0.24s)

TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-885000
--- PASS: TestDownloadOnly/v1.29.3/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnly/v1.30.0-beta.0/json-events (27.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-arm64 start -o=json --download-only -p download-only-684000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-arm64 start -o=json --download-only -p download-only-684000 --force --alsologtostderr --kubernetes-version=v1.30.0-beta.0 --container-runtime=docker --driver=qemu2 : (27.227986792s)
--- PASS: TestDownloadOnly/v1.30.0-beta.0/json-events (27.23s)

TestDownloadOnly/v1.30.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-arm64 logs -p download-only-684000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-arm64 logs -p download-only-684000: exit status 85 (78.8225ms)
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
	|         | -p download-only-293000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
	| delete  | -p download-only-293000             | download-only-293000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT | 20 Mar 24 17:00 PDT |
	| start   | -o=json --download-only             | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:00 PDT |                     |
	|         | -p download-only-885000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| delete  | -p download-only-885000             | download-only-885000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT | 20 Mar 24 17:01 PDT |
	| start   | -o=json --download-only             | download-only-684000 | jenkins | v1.32.0 | 20 Mar 24 17:01 PDT |                     |
	|         | -p download-only-684000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=qemu2                      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/20 17:01:19
	Running on machine: MacOS-M1-Agent-2
	Binary: Built with gc go1.22.1 for darwin/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0320 17:01:19.623418   12749 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:01:19.623544   12749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:01:19.623548   12749 out.go:304] Setting ErrFile to fd 2...
	I0320 17:01:19.623551   12749 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:01:19.623696   12749 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:01:19.624747   12749 out.go:298] Setting JSON to true
	I0320 17:01:19.640879   12749 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7249,"bootTime":1710972030,"procs":506,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:01:19.640943   12749 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:01:19.645843   12749 out.go:97] [download-only-684000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:01:19.649786   12749 out.go:169] MINIKUBE_LOCATION=17713
	I0320 17:01:19.645930   12749 notify.go:220] Checking for updates...
	I0320 17:01:19.658845   12749 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:01:19.661799   12749 out.go:169] MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:01:19.664776   12749 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:01:19.667840   12749 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	W0320 17:01:19.673761   12749 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0320 17:01:19.673957   12749 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:01:19.676705   12749 out.go:97] Using the qemu2 driver based on user configuration
	I0320 17:01:19.676715   12749 start.go:297] selected driver: qemu2
	I0320 17:01:19.676720   12749 start.go:901] validating driver "qemu2" against <nil>
	I0320 17:01:19.676776   12749 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0320 17:01:19.679688   12749 out.go:169] Automatically selected the socket_vmnet network
	I0320 17:01:19.685918   12749 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=0MB
	I0320 17:01:19.686021   12749 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0320 17:01:19.686062   12749 cni.go:84] Creating CNI manager for ""
	I0320 17:01:19.686071   12749 cni.go:158] "qemu2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0320 17:01:19.686077   12749 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0320 17:01:19.686121   12749 start.go:340] cluster config:
	{Name:download-only-684000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0-beta.0 ClusterName:download-only-684000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:01:19.690384   12749 iso.go:125] acquiring lock: {Name:mk21c6fb8716ef2f2d82d13f734b8f5d7d9ad65a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0320 17:01:19.694762   12749 out.go:97] Starting "download-only-684000" primary control-plane node in "download-only-684000" cluster
	I0320 17:01:19.694774   12749 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:01:20.358963   12749 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0320 17:01:20.359037   12749 cache.go:56] Caching tarball of preloaded images
	I0320 17:01:20.359882   12749 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:01:20.365297   12749 out.go:97] Downloading Kubernetes v1.30.0-beta.0 preload ...
	I0320 17:01:20.365343   12749 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:01:20.979948   12749 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0-beta.0/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4?checksum=md5:e2591d3d8d44bfdea8fdcdf9682f34bf -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4
	I0320 17:01:36.543750   12749 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:01:36.543928   12749 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-beta.0-docker-overlay2-arm64.tar.lz4 ...
	I0320 17:01:37.088147   12749 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0-beta.0 on docker
	I0320 17:01:37.088325   12749 profile.go:142] Saving config to /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-684000/config.json ...
	I0320 17:01:37.088340   12749 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/17713-12248/.minikube/profiles/download-only-684000/config.json: {Name:mkba499cd468bad0b11e9edcc7e75adf95c1ffde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0320 17:01:37.088542   12749 preload.go:132] Checking if preload exists for k8s version v1.30.0-beta.0 and runtime docker
	I0320 17:01:37.088661   12749 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0-beta.0/bin/darwin/arm64/kubectl.sha256 -> /Users/jenkins/minikube-integration/17713-12248/.minikube/cache/darwin/arm64/v1.30.0-beta.0/kubectl
	
	
	* The control-plane node download-only-684000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-684000"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-arm64 delete -p download-only-684000
--- PASS: TestDownloadOnly/v1.30.0-beta.0/DeleteAlwaysSucceeds (0.23s)

TestBinaryMirror (0.34s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-arm64 start --download-only -p binary-mirror-025000 --alsologtostderr --binary-mirror http://127.0.0.1:52078 --driver=qemu2 
helpers_test.go:175: Cleaning up "binary-mirror-025000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p binary-mirror-025000
--- PASS: TestBinaryMirror (0.34s)
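Note: `--binary-mirror` redirects the kubectl/kubelet/kubeadm downloads to the given base URL; the test points it at a short-lived server on 127.0.0.1:52078. A minimal sketch of the same setup, assuming a local directory that mirrors dl.k8s.io's /release/<version>/bin/<os>/<arch>/ layout:

	python3 -m http.server 52078 --directory ./mirror &
	out/minikube-darwin-arm64 start --download-only -p binary-mirror-025000 --alsologtostderr --binary-mirror http://127.0.0.1:52078 --driver=qemu2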

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p addons-367000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons enable dashboard -p addons-367000: exit status 85 (62.531334ms)
-- stdout --
	* Profile "addons-367000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-367000"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-arm64 addons disable dashboard -p addons-367000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-arm64 addons disable dashboard -p addons-367000: exit status 85 (66.279042ms)
-- stdout --
	* Profile "addons-367000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-367000"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)
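Note: both PreSetup checks pass precisely because the commands fail: against a profile that does not exist, `addons enable` and `addons disable` exit with status 85 and the "Profile ... not found" hint, and that is the behavior being asserted. Manual check:

	out/minikube-darwin-arm64 addons disable dashboard -p addons-367000; echo "exit=$?"
	# expect exit=85 and the "not found" hint, per the runs above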

TestHyperKitDriverInstallOrUpdate (9.88s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (9.88s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)
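Note: all three `start` invocations here use `--dry-run`, which stops after validating the configuration and never creates the VM; that is why this subtest passes on qemu2 even though the full starts elsewhere in this report fail. Quick check (profile name from the log):

	out/minikube-darwin-arm64 -p nospam-044000 start --dry-run; echo "exit=$?"
	# validation only, so exit=0 is expected here even when a real start would fail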

TestErrorSpam/status (0.1s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status: exit status 7 (32.763583ms)
-- stdout --
	nospam-044000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status" failed: exit status 7
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status: exit status 7 (32.278042ms)
-- stdout --
	nospam-044000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status" failed: exit status 7
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status: exit status 7 (31.729667ms)

                                                
                                                
-- stdout --
	nospam-044000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 status" failed: exit status 7
--- PASS: TestErrorSpam/status (0.10s)
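Note: exit status 7 is minikube's code for a host that exists but is stopped, which is why the three failed status calls above still let the test pass. A minimal Go sketch of that exit-code check, in the spirit of the harness; the binary path and profile name are taken from this run, while the helper itself is ours:

package main

import (
	"fmt"
	"os/exec"
)

// statusExitCode runs `minikube status` for a profile and reports the
// process exit code; 7 means the host exists but is stopped, which the
// error_spam test above treats as acceptable.
func statusExitCode(bin, profile string) (int, error) {
	err := exec.Command(bin, "-p", profile, "status").Run()
	if err == nil {
		return 0, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode(), nil // non-zero exit, not an exec failure
	}
	return -1, err // binary missing, not executable, etc.
}

func main() {
	code, err := statusExitCode("out/minikube-darwin-arm64", "nospam-044000")
	if err != nil {
		panic(err)
	}
	fmt.Println("status exit code:", code) // 7 for host=Stopped, as logged above
}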

                                                
                                    
TestErrorSpam/pause (0.14s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause: exit status 83 (42.375125ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-044000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause: exit status 83 (50.069708ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-044000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause: exit status 83 (42.61125ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-044000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 pause" failed: exit status 83
--- PASS: TestErrorSpam/pause (0.14s)

TestErrorSpam/unpause (0.13s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause: exit status 83 (42.84275ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-044000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause" failed: exit status 83
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause: exit status 83 (40.88775ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-044000"

                                                
                                                
-- /stdout --
error_spam_test.go:161: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause" failed: exit status 83
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause: exit status 83 (42.912208ms)

                                                
                                                
-- stdout --
	* The control-plane node nospam-044000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p nospam-044000"

                                                
                                                
-- /stdout --
error_spam_test.go:184: "out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 unpause" failed: exit status 83
--- PASS: TestErrorSpam/unpause (0.13s)

TestErrorSpam/stop (10.26s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 stop: (4.026277083s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 stop: (3.275546708s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 stop
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-arm64 -p nospam-044000 --log_dir /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/nospam-044000 stop: (2.959605166s)
--- PASS: TestErrorSpam/stop (10.26s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/17713-12248/.minikube/files/etc/test/nested/copy/12665/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/CacheCmd/cache/add_remote (6.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-829000 cache add registry.k8s.io/pause:3.1: (2.183098s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-829000 cache add registry.k8s.io/pause:3.3: (2.211012416s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-darwin-arm64 -p functional-829000 cache add registry.k8s.io/pause:latest: (1.861092375s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (6.26s)
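Note: the `cache add` calls above can be scripted directly; a minimal Go sketch using the binary, profile, and image tags captured in this run:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Cache the same remote images the add_remote test pulls above;
	// `cache add` downloads each image into the profile's image cache.
	images := []string{
		"registry.k8s.io/pause:3.1",
		"registry.k8s.io/pause:3.3",
		"registry.k8s.io/pause:latest",
	}
	for _, img := range images {
		out, err := exec.Command("out/minikube-darwin-arm64",
			"-p", "functional-829000", "cache", "add", img).CombinedOutput()
		if err != nil {
			log.Fatalf("cache add %s failed: %v\n%s", img, err, out)
		}
	}
}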

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.17s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalserialCacheCmdcacheadd_local783869710/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cache add minikube-local-cache-test:functional-829000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 cache delete minikube-local-cache-test:functional-829000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-829000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.17s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/parallel/ConfigCmd (0.24s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 config get cpus: exit status 14 (34.32875ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 config get cpus: exit status 14 (37.470042ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.24s)
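Note: `config get` reports an unset key with exit code 14 ("specified key could not be found in config"), so the unset/get round-trips above are expected to exit non-zero. A Go sketch separating "unset" from real failures; the helper name is ours:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// configGet returns the value of a minikube config key; ok is false when
// the key is unset, which minikube reports with exit code 14 as in the
// ConfigCmd log above.
func configGet(bin, profile, key string) (value string, ok bool, err error) {
	out, err := exec.Command(bin, "-p", profile, "config", "get", key).Output()
	if err != nil {
		if exitErr, isExit := err.(*exec.ExitError); isExit && exitErr.ExitCode() == 14 {
			return "", false, nil // key not present in config
		}
		return "", false, err
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	v, ok, err := configGet("out/minikube-darwin-arm64", "functional-829000", "cpus")
	fmt.Println("value:", v, "set:", ok, "err:", err)
}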

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (164.505375ms)

                                                
                                                
-- stdout --
	* [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the qemu2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:03:42.561635   13402 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:03:42.562096   13402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:42.562104   13402 out.go:304] Setting ErrFile to fd 2...
	I0320 17:03:42.562113   13402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:42.562337   13402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:03:42.564195   13402 out.go:298] Setting JSON to false
	I0320 17:03:42.583848   13402 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7392,"bootTime":1710972030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:03:42.583926   13402 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:03:42.589153   13402 out.go:177] * [functional-829000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	I0320 17:03:42.596177   13402 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:03:42.600069   13402 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:03:42.596264   13402 notify.go:220] Checking for updates...
	I0320 17:03:42.603116   13402 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:03:42.606129   13402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:03:42.609109   13402 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:03:42.612123   13402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:03:42.615496   13402 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:03:42.615791   13402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:03:42.620080   13402 out.go:177] * Using the qemu2 driver based on existing profile
	I0320 17:03:42.627097   13402 start.go:297] selected driver: qemu2
	I0320 17:03:42.627103   13402 start.go:901] validating driver "qemu2" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:03:42.627153   13402 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:03:42.634091   13402 out.go:177] 
	W0320 17:03:42.638100   13402 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0320 17:03:42.642060   13402 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-829000 --dry-run --alsologtostderr -v=1 --driver=qemu2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
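Note: `--dry-run` exercises flag validation without creating a VM; the 250MB request trips RSRC_INSUFFICIENT_REQ_MEMORY (exit code 23) because the usable minimum is 1800MB, as logged above. A Go sketch reproducing that negative check with this run's flags:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --dry-run validates configuration only; a 250MB request is rejected
	// with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY), as logged above.
	err := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "functional-829000", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=qemu2").Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		fmt.Println("dry-run exit code:", exitErr.ExitCode()) // expect 23
	}
}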

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.12s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-arm64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p functional-829000 --dry-run --memory 250MB --alsologtostderr --driver=qemu2 : exit status 23 (117.107625ms)

                                                
                                                
-- stdout --
	* [functional-829000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote qemu2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0320 17:03:42.806823   13413 out.go:291] Setting OutFile to fd 1 ...
	I0320 17:03:42.806947   13413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:42.806950   13413 out.go:304] Setting ErrFile to fd 2...
	I0320 17:03:42.806953   13413 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0320 17:03:42.807084   13413 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/17713-12248/.minikube/bin
	I0320 17:03:42.808639   13413 out.go:298] Setting JSON to false
	I0320 17:03:42.825488   13413 start.go:129] hostinfo: {"hostname":"MacOS-M1-Agent-2.local","uptime":7392,"bootTime":1710972030,"procs":517,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.3.1","kernelVersion":"23.3.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"6baf25a2-d406-53a7-bb3a-d7da7f56bb59"}
	W0320 17:03:42.825580   13413 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0320 17:03:42.830145   13413 out.go:177] * [functional-829000] minikube v1.32.0 sur Darwin 14.3.1 (arm64)
	I0320 17:03:42.837086   13413 out.go:177]   - MINIKUBE_LOCATION=17713
	I0320 17:03:42.841077   13413 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	I0320 17:03:42.837126   13413 notify.go:220] Checking for updates...
	I0320 17:03:42.848115   13413 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-arm64
	I0320 17:03:42.852089   13413 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0320 17:03:42.855140   13413 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	I0320 17:03:42.858110   13413 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0320 17:03:42.861416   13413 config.go:182] Loaded profile config "functional-829000": Driver=qemu2, ContainerRuntime=docker, KubernetesVersion=v1.29.3
	I0320 17:03:42.861694   13413 driver.go:392] Setting default libvirt URI to qemu:///system
	I0320 17:03:42.866109   13413 out.go:177] * Utilisation du pilote qemu2 basé sur le profil existant
	I0320 17:03:42.873080   13413 start.go:297] selected driver: qemu2
	I0320 17:03:42.873087   13413 start.go:901] validating driver "qemu2" against &{Name:functional-829000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18277/minikube-v1.32.1-1710573846-18277-arm64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:qemu2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.3 ClusterName:functional-829000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.29.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network:socket_vmnet Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0320 17:03:42.873160   13413 start.go:912] status for qemu2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0320 17:03:42.879077   13413 out.go:177] 
	W0320 17:03:42.883072   13413 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0320 17:03:42.887123   13413 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.12s)

TestFunctional/parallel/AddonsCmd (0.12s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/License (1.41s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-arm64 license
functional_test.go:2284: (dbg) Done: out/minikube-darwin-arm64 license: (1.407913041s)
--- PASS: TestFunctional/parallel/License (1.41s)

TestFunctional/parallel/Version/short (0.04s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/ImageCommands/Setup (5.61s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (5.571320958s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-829000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (5.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image rm gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.08s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-829000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 image save --daemon gcr.io/google-containers/addon-resizer:functional-829000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-829000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.14s)

TestFunctional/parallel/ProfileCmd/profile_list (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-arm64 profile list
functional_test.go:1311: Took "72.727875ms" to run "out/minikube-darwin-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-arm64 profile list -l
functional_test.go:1325: Took "35.577792ms" to run "out/minikube-darwin-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.11s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json
functional_test.go:1362: Took "71.904708ms" to run "out/minikube-darwin-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-arm64 profile list -o json --light
functional_test.go:1375: Took "36.642291ms" to run "out/minikube-darwin-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.11s)
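Note: the Took figures above are plain wall-clock timings around each command; a Go sketch of the same measurement, assuming the binary path from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Time a profile listing the same way the test records its "Took" values.
	start := time.Now()
	out, err := exec.Command("out/minikube-darwin-arm64",
		"profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("took %v for %d bytes of JSON\n", time.Since(start), len(out))
}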

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:351: (dbg) Done: dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.: (10.01254325s)
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (10.04s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-arm64 -p functional-829000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.17s)

TestFunctional/delete_addon-resizer_images (0.17s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-829000
--- PASS: TestFunctional/delete_addon-resizer_images (0.17s)

TestFunctional/delete_my-image_image (0.04s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-829000
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-829000
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (2.08s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-arm64 stop -p json-output-503000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-arm64 stop -p json-output-503000 --output=json --user=testUser: (2.075639666s)
--- PASS: TestJSONOutput/stop/Command (2.08s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.33s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-arm64 start -p json-output-error-565000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p json-output-error-565000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (99.871083ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f48bbad8-38a4-4e1d-9842-55e9c6b5584a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-565000] minikube v1.32.0 on Darwin 14.3.1 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9352d10-979f-45fa-bfa6-144671ced10e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17713"}}
	{"specversion":"1.0","id":"29c3c096-62f5-4787-b2af-9b1d815492b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig"}}
	{"specversion":"1.0","id":"475947b4-8777-426c-8455-0fd25d04f1b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-arm64"}}
	{"specversion":"1.0","id":"ecafda6b-0cc2-4250-bacb-69a74c4c7d7b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bfb2ea6c-ec99-4dde-b24c-41d29685bffb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube"}}
	{"specversion":"1.0","id":"f482de9a-352f-4e6d-92fc-058a30276841","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"cd6de1ad-539b-4b07-a9a0-69af6d2c5c2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-565000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p json-output-error-565000
--- PASS: TestErrorJSONOutput (0.33s)
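Note: with --output=json each line is a CloudEvents-style object with the fields shown above (specversion, id, source, type, data). A minimal Go sketch that scans such a stream and surfaces error events; the struct mirrors only the fields visible in this log:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the CloudEvents-style lines minikube emits with
// --output=json, as captured in the TestErrorJSONOutput block above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. pipe `minikube start --output=json` here
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate any non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}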

                                                
                                    
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-arm64
--- PASS: TestMainNoArgs (0.04s)

TestStoppedBinaryUpgrade/Setup (4.94s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.94s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-arm64 logs -p stopped-upgrade-184000
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.79s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-arm64 start -p NoKubernetes-342000 --no-kubernetes --kubernetes-version=1.20 --driver=qemu2 : exit status 14 (98.680834ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-342000] minikube v1.32.0 on Darwin 14.3.1 (arm64)
	  - MINIKUBE_LOCATION=17713
	  - KUBECONFIG=/Users/jenkins/minikube-integration/17713-12248/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-arm64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/17713-12248/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
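Note: the usage failure above (MK_USAGE, exit code 14) comes from combining --no-kubernetes with --kubernetes-version. A short Go sketch of the same negative check, using the flags from this run:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// These two flags conflict; minikube rejects the combination with a
	// usage error (MK_USAGE, exit code 14) before doing any work.
	err := exec.Command("out/minikube-darwin-arm64", "start",
		"-p", "NoKubernetes-342000", "--no-kubernetes",
		"--kubernetes-version=1.20", "--driver=qemu2").Run()
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
		fmt.Println("flag conflict rejected as expected")
	}
}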

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-342000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-342000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (47.486834ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-342000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-342000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.05s)

TestNoKubernetes/serial/ProfileList (0.16s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.16s)

TestNoKubernetes/serial/Stop (2.1s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-arm64 stop -p NoKubernetes-342000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-arm64 stop -p NoKubernetes-342000: (2.100332875s)
--- PASS: TestNoKubernetes/serial/Stop (2.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-arm64 ssh -p NoKubernetes-342000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-arm64 ssh -p NoKubernetes-342000 "sudo systemctl is-active --quiet service kubelet": exit status 83 (49.21175ms)

                                                
                                                
-- stdout --
	* The control-plane node NoKubernetes-342000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p NoKubernetes-342000"

                                                
                                                
-- /stdout --
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.05s)

TestStartStop/group/old-k8s-version/serial/Stop (2.93s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p old-k8s-version-360000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p old-k8s-version-360000 --alsologtostderr -v=3: (2.929071791s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (2.93s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p old-k8s-version-360000 -n old-k8s-version-360000: exit status 7 (58.836625ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p old-k8s-version-360000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.12s)
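Note: `status --format={{.Host}}` renders only the Host field through a Go template, so a stopped cluster prints "Stopped" alongside the tolerated exit code 7 ("may be ok"), and addons can still be enabled for the next start. A Go sketch of that sequence with this run's profile:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	bin, profile := "out/minikube-darwin-arm64", "old-k8s-version-360000"
	// --format={{.Host}} prints only the host state; exit code 7 accompanies
	// "Stopped" and is tolerated, as in the test above.
	out, _ := exec.Command(bin, "status", "--format={{.Host}}", "-p", profile).Output()
	if strings.TrimSpace(string(out)) == "Stopped" {
		fmt.Println("host stopped; enabling dashboard for the next start")
	}
	if err := exec.Command(bin, "addons", "enable", "dashboard", "-p", profile).Run(); err != nil {
		fmt.Println("enable failed:", err)
	}
}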

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (3.12s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p no-preload-472000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p no-preload-472000 --alsologtostderr -v=3: (3.122027292s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (3.12s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p no-preload-472000 -n no-preload-472000: exit status 7 (58.073959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p no-preload-472000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/embed-certs/serial/Stop (3.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p embed-certs-878000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p embed-certs-878000 --alsologtostderr -v=3: (3.387682917s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (3.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p embed-certs-878000 -n embed-certs-878000: exit status 7 (58.796708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p embed-certs-878000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (3.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p default-k8s-diff-port-391000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p default-k8s-diff-port-391000 --alsologtostderr -v=3: (3.391902042s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (3.39s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-arm64 addons enable metrics-server -p newest-cni-054000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.06s)

TestStartStop/group/newest-cni/serial/Stop (3.16s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-arm64 stop -p newest-cni-054000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-arm64 stop -p newest-cni-054000 --alsologtostderr -v=3: (3.163764708s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p default-k8s-diff-port-391000 -n default-k8s-diff-port-391000: exit status 7 (59.143208ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p default-k8s-diff-port-391000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-arm64 status --format={{.Host}} -p newest-cni-054000 -n newest-cni-054000: exit status 7 (59.121291ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-arm64 addons enable dashboard -p newest-cni-054000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

Test skip (24/266)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.29.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.29.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.3/cached-images (0.00s)

TestDownloadOnly/v1.29.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.29.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.3/binaries (0.00s)

TestDownloadOnly/v1.30.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
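
The skip above is an architecture gate. A minimal sketch of that pattern (the test name below is hypothetical; the real check lives in functional_test.go as referenced in the log):

package sketch

import (
	"runtime"
	"testing"
)

// TestMySQLSketch shows the arm64 gate pattern from the log above.
func TestMySQLSketch(t *testing.T) {
	if runtime.GOARCH == "arm64" {
		// No arm64 mysql image is available to the test; see
		// https://github.com/kubernetes/minikube/issues/10144
		t.Skip("arm64 is not supported by mysql")
	}
	// ... deploy mysql and validate connectivity ...
}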

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/any-port (9.42s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1900788738/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710979386565291000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1900788738/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710979386565291000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1900788738/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710979386565291000" to /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1900788738/001/test-1710979386565291000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (61.382542ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.028792ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.351416ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (86.661334ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.169416ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.812292ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.507708ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:123: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo umount -f /mount-9p": exit status 83 (48.637583ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:92: "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdany-port1900788738/001:/mount-9p --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/any-port (9.42s)
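
The repeated findmnt probes above come from a retry loop. A minimal Go sketch of that polling (binary path and profile are from the log; the retry count and sleep interval are illustrative assumptions, not the suite's values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-829000" // from the log above
	for attempt := 0; attempt < 7; attempt++ {
		// Re-run the same probe the test logs: is the 9p mount visible?
		probe := exec.Command("out/minikube-darwin-arm64", "-p", profile,
			"ssh", "findmnt -T /mount-9p | grep 9p")
		if probe.Run() == nil {
			fmt.Println("9p mount visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	// On macOS the unsigned mount server may never get permission to listen
	// on a non-localhost port without a user prompt, hence the SKIP above.
	fmt.Println("mount did not appear; skipping")
}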

TestFunctional/parallel/MountCmd/specific-port (13s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port86006464/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (65.917208ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (94.4ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (90.330708ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (88.61125ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.295333ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (89.818ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T /mount-9p | grep 9p": exit status 83 (92.857459ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "sudo umount -f /mount-9p": exit status 83 (48.754542ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:232: "out/minikube-darwin-arm64 -p functional-829000 ssh \"sudo umount -f /mount-9p\"": exit status 83
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdspecific-port86006464/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (13.00s)

TestFunctional/parallel/MountCmd/VerifyCleanup (13.5s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4142090194/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4142090194/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4142090194/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (84.536458ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (90.042709ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (89.681625ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (83.924958ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (93.300125ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (88.619083ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-arm64 -p functional-829000 ssh "findmnt -T" /mount1: exit status 83 (91.42625ms)

-- stdout --
	* The control-plane node functional-829000 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p functional-829000"

-- /stdout --
functional_test_mount_test.go:340: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4142090194/001:/mount1 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4142090194/001:/mount2 --alsologtostderr -v=1] ...
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-arm64 mount -p functional-829000 /var/folders/vk/m7_4p8zn1574p4fs17hx47100000gp/T/TestFunctionalparallelMountCmdVerifyCleanup4142090194/001:/mount3 --alsologtostderr -v=1] ...
--- SKIP: TestFunctional/parallel/MountCmd/VerifyCleanup (13.50s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (2.51s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-670000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-670000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-670000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /etc/hosts:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /etc/resolv.conf:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-670000

>>> host: crictl pods:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: crictl containers:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> k8s: describe netcat deployment:
error: context "cilium-670000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-670000" does not exist

>>> k8s: netcat logs:
error: context "cilium-670000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-670000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-670000" does not exist

>>> k8s: coredns logs:
error: context "cilium-670000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-670000" does not exist

>>> k8s: api server logs:
error: context "cilium-670000" does not exist

>>> host: /etc/cni:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: ip a s:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: ip r s:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: iptables-save:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: iptables table nat:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-670000

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-670000

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-670000" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-670000" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-670000

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-670000

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-670000" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-670000" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-670000" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-670000" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-670000" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: kubelet daemon config:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> k8s: kubelet logs:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-670000

>>> host: docker daemon status:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: docker daemon config:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: docker system info:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: cri-docker daemon status:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: cri-docker daemon config:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: cri-dockerd version:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: containerd daemon status:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: containerd daemon config:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-670000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-670000"

                                                
                                                
----------------------- debugLogs end: cilium-670000 [took: 2.283484375s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-670000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p cilium-670000
--- SKIP: TestNetworkPlugins/group/cilium (2.51s)
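Note on the ">>> host: ..." sections above: the network-plugins debug collector walks a fixed list of host inspection commands and prints each command's output under a ">>> host:" header. Because the "cilium-670000" profile was never created (the test is skipped on this driver), every command fails with the same "Profile ... not found" hint. The following is a minimal Go sketch of that collection pattern, not minikube's actual helper; the command table is truncated and the binary path mirrors the one used in this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Inspection commands to run on the cluster host; the descriptions match
    // the ">>> host:" headers above (hypothetical, truncated table).
    var hostCmds = []struct{ desc, cmd string }{
        {"docker system info", "sudo docker system info"},
        {"containerd daemon status", "sudo systemctl status containerd"},
        {"crio config", "sudo crio config"},
    }

    // collectHostDebugLogs shells out to the profile for each inspection
    // command and prints the result under a ">>> host: ..." header. When the
    // profile does not exist, minikube itself emits the "Profile ... not
    // found" message seen repeatedly above.
    func collectHostDebugLogs(profile string) {
        for _, c := range hostCmds {
            fmt.Printf(">>> host: %s:\n", c.desc)
            out, _ := exec.Command("out/minikube-darwin-arm64", "ssh", "-p", profile, c.cmd).CombinedOutput()
            fmt.Println(string(out))
        }
    }

    func main() {
        collectHostDebugLogs("cilium-670000")
    }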

TestStartStop/group/disable-driver-mounts (0.24s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-arm64 delete -p disable-driver-mounts-903000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.24s)
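The guard at start_stop_delete_test.go:103 that produces this skip is driver-conditional. A hedged sketch of what such a guard can look like with Go's testing package follows; the helper name and the way the driver flag is passed in are assumptions for illustration, not minikube's actual code:

    package example_test

    import (
        "strings"
        "testing"
    )

    // skipUnlessVirtualBox is a hypothetical stand-in for the guard above:
    // the subtest bails out unless the suite was started with the virtualbox
    // driver, so on the qemu2 driver it reports SKIP almost immediately.
    func skipUnlessVirtualBox(t *testing.T, startArgs []string) {
        t.Helper()
        if !strings.Contains(strings.Join(startArgs, " "), "--driver=virtualbox") {
            t.Skip("skipping - only runs on virtualbox")
        }
    }

    func TestDisableDriverMounts(t *testing.T) {
        skipUnlessVirtualBox(t, []string{"--driver=qemu2"}) // skips here
        // ... the disable-driver-mounts assertions would follow ...
    }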